Latest AI News

The largest orbital compute cluster is open for business
For all the hype about data centers in space, there just aren’t very many GPUs up there. As that starts to change, the near-term business of orbital compute is starting to take shape. The largest compute cluster currently in orbit was launched by Canada’s Kepler Communications in January, and boasts about 40 Nvidia Orin edge processors onboard 10 operational satellites, all linked together by laser communications links. The company now has 18 customers, and announced its newest on Monday — Sophia Space, a startup that will test the software for its unique orbital compute onboard Kepler’s constellation.

Experts expect that we won’t see large-scale data centers like those envisioned by SpaceX or Blue Origin until the 2030s. The first step will be processing data that is collected in orbit to improve the capabilities of space-based sensors used by private companies and government agencies. Kepler doesn’t see itself as a data center company, but as infrastructure for applications in space, CEO Mina Mitry tells TechCrunch. It wants to be a layer that provides network services for other satellites in space, or drones and aircraft in the sky below.

Sophia, on the other hand, is developing passively cooled space computers that could solve one of the key challenges for large-scale data centers in orbit: keeping powerful processors from overheating without having to build and launch heavy, expensive active-cooling systems. In the new partnership, Sophia will upload its proprietary operating system to one of Kepler’s satellites and attempt to launch and configure it across six GPUs on two spacecraft. That sort of activity is table stakes in a terrestrial data center, but this is the first time it will be attempted in orbit. Making sure the software works in orbit will be a key de-risking exercise for Sophia ahead of its first planned satellite launch in late 2027.

For Kepler, the partnership helps prove the utility of its network. Right now, it is carrying and processing data uploaded from the ground, or collected by hosted payloads on its own spacecraft. But as the sector matures, the company expects to start linking up with third-party satellites to provide networking and processing services. Mitry says satellite companies are now planning future assets around this model, pointing to the benefits of offloading processing for more power-hungry sensors, like synthetic aperture radar. The U.S. military is a key customer for that kind of work as it develops a new missile defense system predicated on satellites detecting and tracking threats. Kepler has already demonstrated a space-to-air laser link in a demo for the U.S. government. That kind of edge processing — dealing with data where it is collected for faster responsiveness — is where orbital data centers will initially prove their value.

That vision sets Sophia and Kepler apart from established space companies like SpaceX and Blue Origin, or startups like Starcloud and Aetherflux that are raising significant capital to focus on large-scale data centers with data center-style processors. “Because we have the belief it’s more inference than training, we want more distributed GPUs that do inference, rather than one superpower GPU that has the training workload capacity,” Mitry told TechCrunch. “If this thing consumes kilowatts of power and you’re only running 10% of the time, then that’s not super helpful. In our case, our GPUs are running 100% of the time.” And once these technologies are proven in orbit, well, anything can happen.
Sophia CEO Rob DeMillo points out that Wisconsin adopted a ban on data center construction last week, something some lawmakers in Congress are also pushing. Anything that limits data centers on Earth is, in their eyes, making the space-based alternative more attractive. “There’s no more data centers in this country,” DeMillo mused. “It’s gonna get weird from here.”
View

Accenture Invests in Replit, Partners to Scale Vibe-Coded Software
Replit offers a cloud-based platform that integrates coding environments, AI-assisted development, collaboration tools, and hosting.
View

Is India Ready for Drone Warfare? The Weakest Link Is Obvious
With the use of drones in the conflicts in Ukraine and West Asia, India is now stepping up adoption across its defence forces.
View

Ramp’s AI Coworker Turns Employee Workflows Into Reusable Skills
“When one person on a team figures out a better workflow, everyone on that team gets it and gets more productive.”
View

New Research Finds Seven ‘Deadly’ Vulnerabilities in AI Benchmarks
A study from UC Berkeley showed how easy it is to game popular AI model evaluation tests.
View

Trump officials may be encouraging banks to test Anthropic’s Mythos model
Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned bank executives for a meeting this week where they encouraged the executives to use Anthropic’s new Mythos model to detect vulnerabilities, according to Bloomberg. Indeed, while JPMorgan Chase was the only bank listed as one of the initial partner organizations with access to the model, Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley are reportedly testing Mythos as well. Anthropic announced the model this week but said it would be limiting access for now, in part because Mythos — despite not being trained specifically for cybersecurity — is too good at finding security vulnerabilities. (Others suggested this was hype or simply a smart enterprise sales strategy.) The report is particularly surprising since Anthropic is currently battling the Trump administration in court over the Department of Defense’s designation of Anthropic as a supply-chain risk; that designation came after negotiations fell apart over the company’s efforts to limit how its AI models can be used by the government. Meanwhile, the Financial Times reports that U.K. financial regulators are also discussing the risk posed by Mythos.
View

Apple reportedly testing four designs for upcoming smart glasses
Apple plans to sell its first smart glasses in 2027, with a possible unveiling at the end of this year, according to Bloomberg’s Mark Gurman. Gurman has been reporting steadily on the evolution of the company’s smart glasses strategy, but now he has more details about how they’ll look — he said Apple is testing four designs, and could ultimately launch with some or all of them. Those designs reportedly include a large rectangular frame, a slimmer rectangular frame (similar to the glasses worn by CEO Tim Cook), a larger oval or circular frame, and a smaller oval or circular frame. Apple is also considering different colors, including black, ocean blue, and light brown. In some ways, these glasses are a step back from an ambitious plan that once called for Apple to launch a variety of mixed and augmented reality devices — a plan that already stumbled with product delays and the lackluster reception of the Vision Pro. These glasses, meanwhile, sound closer to Meta’s Ray-Ban glasses. They won’t have any displays, but will allow users to take photos and videos (Apple is reportedly using oval camera lenses), answer phone calls, play music, and interact with the long-promised Siri upgrade.
View

At the HumanX conference, everyone was talking about Claude
At the HumanX AI conference in San Francisco this week, thousands of techies descended upon the city’s Moscone Center, where discussion focused on the ways agentic AI is changing the business landscape. Agents, which automate business and coding tasks, have begun to be deployed across industries — largely through enterprise and consumer-focused chatbots. Naturally, I wanted to know which chatbot was the most popular, and I consistently heard one name most often: Claude. Anthropic got shoutouts in many of the panels held throughout the week, but it was also a topic of discussion with the vendors I spoke to while perusing the convention room floor.

The chatbot I didn’t hear a lot about? ChatGPT. One of the vendors I spoke to made a point of telling me that he and his team used Claude a lot, while he felt ChatGPT and OpenAI had gone downhill — or, as the internet likes to say, “fell off.” Lately, that does not appear to be a particularly unique take. Indeed, it’s not clear what will cure the perception that, despite a recent $122 billion funding round and its upcoming IPO, OpenAI has lost its footing — or, at the very least, seems increasingly unsure of what the next step is.

Part of the problem may be a perception that the company lacks focus. Last month, OpenAI abandoned a number of long-simmering side-quests (including its AI video generator Sora and a troubled plan to launch a “sexy” version of ChatGPT), locking in instead on business and coding services. In the meantime, a number of developments, including a recent New Yorker piece that questioned whether the company’s CEO, Sam Altman, was trustworthy or not, have spurred a certain amount of negative buzz around the company. The company’s work with the Trump administration hasn’t won it any friends either, nor has its decision to inject advertising into ChatGPT.

During one of HumanX’s discussions, Sierra co-founder and CEO Bret Taylor (who is also the chairman of the board of OpenAI) defended Altman when asked by Alex Heath about the New Yorker profile. “I think Sam is one of the most visible leaders and executives in the world,” said Taylor. “If you want to seek out detractors for him, you’ll find them, and they’ll be very vocal about it,” he said, adding: “I think Sam’s remarkable. I think he’s a remarkable leader of AI, and I really trust his character as someone who’s worked with him.”

The controversies and vacillations can make OpenAI seem reactive rather than strategic, as if it’s simply responding to events rather than shaping them. That said, when it comes to prominence and revenue, OpenAI and Anthropic are neck and neck — or at least, that’s how it looks, with some data suggesting that Anthropic is catching up among business users. The Wall Street Journal recently analyzed their finances, showing that the two companies were “the fastest-growing businesses in the history of tech.” In that sense, perhaps “falling off” for OpenAI just means it’s not the undisputed champ anymore. It has competition — which, in most industries, is normal.

If anything, it remains clear that OpenAI is determined to do what it takes to remain dominant. This week, the company announced a new $100 subscription tier to ChatGPT with substantially more access to Codex, its coding tool. The move seems clearly designed to spur broader use of the tool while peeling users away from Claude Code.
During a HumanX discussion with Bloomberg reporter Rachel Metz, OpenAI CTO of B2B applications Srinivas Narayanan noted how quickly the technological landscape has been changing. “We are in this incredible moment in technology, where every month, and sometimes every day, we are all looking forward to something new,” Narayanan said. Pointing to agentic coding as an example, he added, “We knew AI was going to impact software engineering, people have been using assistive coding over the last year, but even in just the last few months, the entire field has changed.” Agentic accomplishments may be a big focus of the tech community currently, since other applications for AI (creative uses, for example) haven’t really panned out yet. Still, the amount of work that companies have begun to offload onto their new little automated helpers is somewhat surprising — and, as Narayanan noted in his remarks, it has all happened in a relatively short period of time. In such an unpredictable environment, the future is still wide open.
View

From LLMs to hallucinations, here’s a simple guide to common AI terms
Artificial intelligence is a deep and convoluted world. The scientists who work in this field often rely on jargon and lingo to explain what they’re working on. As a result, we frequently have to use those technical terms in our coverage of the artificial intelligence industry. That’s why we thought it would be helpful to put together a glossary with definitions of some of the most important words and phrases that we use in our articles. We will regularly update this glossary to add new entries as researchers continually uncover novel methods to push the frontier of artificial intelligence while identifying emerging safety risks.

Artificial general intelligence, or AGI, is a nebulous term. But it generally refers to AI that’s more capable than the average human at many, if not most, tasks. OpenAI CEO Sam Altman recently described AGI as the “equivalent of a median human that you could hire as a co-worker.” Meanwhile, OpenAI’s charter defines AGI as “highly autonomous systems that outperform humans at most economically valuable work.” Google DeepMind’s understanding differs slightly from these two definitions; the lab views AGI as “AI that’s at least as capable as humans at most cognitive tasks.” Confused? Not to worry — so are experts at the forefront of AI research.

An AI agent refers to a tool that uses AI technologies to perform a series of tasks on your behalf — beyond what a more basic AI chatbot could do — such as filing expenses, booking tickets or a table at a restaurant, or even writing and maintaining code. However, as we’ve explained before, there are lots of moving pieces in this emergent space, so “AI agent” might mean different things to different people. Infrastructure is also still being built out to deliver on its envisaged capabilities. But the basic concept implies an autonomous system that may draw on multiple AI systems to carry out multistep tasks.

Given a simple question, a human brain can answer without even thinking too much about it — things like “which animal is taller, a giraffe or a cat?” But in many cases, you often need a pen and paper to come up with the right answer because there are intermediary steps. For instance, if a farmer has chickens and cows, and together they have 40 heads and 120 legs, you might need to write down a simple equation to come up with the answer (20 chickens and 20 cows). In an AI context, chain-of-thought reasoning for large language models means breaking down a problem into smaller, intermediate steps to improve the quality of the end result. It usually takes longer to get an answer, but the answer is more likely to be correct, especially in a logic or coding context. Reasoning models are developed from traditional large language models and optimized for chain-of-thought thinking thanks to reinforcement learning. (See: Large language model)
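To make the farmer puzzle concrete, here is one way to write out those intermediate steps, using c for chickens and w for cows:

```latex
\begin{align*}
c + w &= 40 && \text{(heads: every animal has one)} \\
2c + 4w &= 120 && \text{(legs: chickens have 2, cows have 4)} \\
2c + 4(40 - c) &= 120 && \text{(substitute } w = 40 - c\text{)} \\
160 - 2c &= 120 \\
c &= 20, \qquad w = 20
\end{align*}
```

A reasoning model is, in effect, trained via reinforcement learning to produce this kind of scratch work as intermediate tokens before committing to a final answer.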
Although somewhat of a multivalent term, compute generally refers to the vital computational power that allows AI models to operate. This type of processing fuels the AI industry, giving it the ability to train and deploy its powerful models. The term is often a shorthand for the kinds of hardware that provide the computational power — things like GPUs, CPUs, TPUs, and other forms of infrastructure that form the bedrock of the modern AI industry.

Deep learning is a subset of self-improving machine learning in which AI algorithms are designed with a multi-layered, artificial neural network (ANN) structure. This allows them to make more complex correlations compared to simpler machine learning-based systems, such as linear models or decision trees. The structure of deep learning algorithms draws inspiration from the interconnected pathways of neurons in the human brain. Deep learning AI models are able to identify important characteristics in data themselves, rather than requiring human engineers to define these features. The structure also supports algorithms that can learn from errors and, through a process of repetition and adjustment, improve their own outputs. However, deep learning systems require a lot of data points to yield good results (millions or more). They also typically take longer to train compared to simpler machine learning algorithms — so development costs tend to be higher. (See: Neural network)

Diffusion is the tech at the heart of many art-, music-, and text-generating AI models. Inspired by physics, diffusion systems slowly “destroy” the structure of data — for example, photos, songs, and so on — by adding noise until there’s nothing left. In physics, diffusion is spontaneous and irreversible — sugar diffused in coffee can’t be restored to cube form. But diffusion systems in AI aim to learn a sort of “reverse diffusion” process to restore the destroyed data, gaining the ability to recover the data from noise.

Distillation is a technique used to extract knowledge from a large AI model with a “teacher-student” setup. Developers send requests to a teacher model and record the outputs. Answers are sometimes compared with a dataset to see how accurate they are. These outputs are then used to train the student model, which learns to approximate the teacher’s behavior. Distillation can be used to create a smaller, more efficient model based on a larger model with minimal distillation loss. This is likely how OpenAI developed GPT-4 Turbo, a faster version of GPT-4. While all AI companies use distillation internally, it may have also been used by some AI companies to catch up with frontier models. Distillation from a competitor usually violates the terms of service of AI APIs and chat assistants.
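As a rough sketch of that teacher-student loop, here is what distillation can look like in PyTorch. Everything here (the two tiny linear models, the random input batches, the temperature value) is a stand-in for illustration; real pipelines record a large pretrained teacher’s outputs at vastly greater scale:

```python
import torch
import torch.nn.functional as F

# Stand-in models: in practice the teacher is a large pretrained network
# and the student is a smaller one you want to train cheaply.
teacher = torch.nn.Linear(16, 4)
student = torch.nn.Linear(16, 4)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's output distribution

for step in range(100):
    x = torch.randn(32, 16)  # stand-in for real input batches
    with torch.no_grad():
        teacher_logits = teacher(x)   # record the teacher's outputs
    student_logits = student(x)
    # KL divergence between softened distributions: the student is trained
    # to approximate the teacher's behavior, not ground-truth labels.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Note that the student never sees ground-truth labels in this sketch; it is trained purely to match the teacher’s softened output distribution, which is what lets a smaller model inherit much of a larger model’s behavior.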
Fine-tuning refers to the further training of an AI model to optimize performance for a more specific task or area than was previously a focal point of its training — typically by feeding in new, specialized (i.e., task-oriented) data. Many AI startups are taking large language models as a starting point to build a commercial product but are vying to amp up utility for a target sector or task by supplementing earlier training cycles with fine-tuning based on their own domain-specific knowledge and expertise. (See: Large language model [LLM])

A GAN, or Generative Adversarial Network, is a type of machine learning framework that underpins some important developments in generative AI when it comes to producing realistic data — including (but not only) deepfake tools. GANs involve the use of a pair of neural networks, one of which draws on its training data to generate an output that is passed to the other model to evaluate. This second, discriminator model thus plays the role of a classifier on the generator’s output — enabling it to improve over time. The GAN structure is set up as a competition (hence “adversarial”), with the two models essentially programmed to try to outdo each other: the generator is trying to get its output past the discriminator, while the discriminator is working to spot artificially generated data. This structured contest can optimize AI outputs to be more realistic without the need for additional human intervention, though GANs work best for narrower applications (such as producing realistic photos or videos) rather than general-purpose AI.
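A minimal toy version of that competition, assuming PyTorch, might look like the following; the one-dimensional “data” and tiny networks are purely illustrative:

```python
import torch
import torch.nn as nn

# Toy 1-D GAN: the generator learns to mimic samples drawn from a
# normal distribution with mean 4 and standard deviation 1.5.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # stand-in "training data"
    fake = generator(torch.randn(64, 8))    # generator's attempt

    # Discriminator step: learn to label real data 1 and generated data 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to get its output classified as real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Each iteration, the discriminator gets slightly better at spotting fakes, which in turn gives the generator a sharper signal about what realistic output looks like; that is the adversarial dynamic described above.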
Hallucination is the AI industry’s preferred term for AI models making stuff up — literally generating information that is incorrect. Obviously, it’s a huge problem for AI quality. Hallucinations produce GenAI outputs that can be misleading and could even lead to real-life risks — with potentially dangerous consequences (think of a health query that returns harmful medical advice). This is why most GenAI tools’ small print now warns users to verify AI-generated answers, even though such disclaimers are usually far less prominent than the information the tools dispense at the touch of a button. The problem of AIs fabricating information is thought to arise as a consequence of gaps in training data. For general-purpose GenAI especially — also sometimes known as foundation models — this looks difficult to resolve. There is simply not enough data in existence to train AI models to comprehensively resolve all the questions we could possibly ask. TL;DR: we haven’t invented God (yet). Hallucinations are contributing to a push towards increasingly specialized and/or vertical AI models — i.e., domain-specific AIs that require narrower expertise — as a way to reduce the likelihood of knowledge gaps and shrink disinformation risks.

Inference is the process of running an AI model. It’s setting a model loose to make predictions or draw conclusions from previously seen data. To be clear, inference can’t happen without training; a model must learn patterns in a set of data before it can effectively extrapolate from this training data. Many types of hardware can perform inference, ranging from smartphone processors to beefy GPUs to custom-designed AI accelerators. But not all of them can run models equally well. Very large models would take ages to make predictions on, say, a laptop versus a cloud server with high-end AI chips. [See: Training]

Large language models, or LLMs, are the AI models used by popular AI assistants, such as ChatGPT, Claude, Google’s Gemini, Meta’s AI Llama, Microsoft Copilot, or Mistral’s Le Chat. When you chat with an AI assistant, you interact with a large language model that processes your request directly or with the help of different available tools, such as web browsing or code interpreters. AI assistants and LLMs can have different names. For instance, GPT is OpenAI’s large language model and ChatGPT is the AI assistant product. LLMs are deep neural networks made of billions of numerical parameters (or weights, see below) that learn the relationships between words and phrases and create a representation of language, a sort of multidimensional map of words. These models are created from encoding the patterns they find in billions of books, articles, and transcripts. When you prompt an LLM, the model generates the most likely pattern that fits the prompt. It then evaluates the most probable next word after the last one based on what was said before. Repeat, repeat, and repeat. (See: Neural network)
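That “repeat, repeat, and repeat” loop is easy to sketch. In this toy version, a random score table stands in for the trained network, and each next word depends only on the previous one, whereas a real LLM conditions on the entire context through billions of learned weights:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "a", "mat"]

# Stand-in for a trained model: a random table mapping the previous
# token's ID to a score for every word in the vocabulary.
W = rng.normal(size=(len(vocab), len(vocab)))

def next_token_probs(prev_token_id):
    logits = W[prev_token_id]
    exps = np.exp(logits - logits.max())  # softmax over the vocabulary
    return exps / exps.sum()

tokens = [vocab.index("the")]  # the "prompt": a single starting word
for _ in range(5):
    probs = next_token_probs(tokens[-1])
    tokens.append(int(rng.choice(len(vocab), p=probs)))  # sample next word

print(" ".join(vocab[t] for t in tokens))  # prints six sampled tokens
```

The output here is gibberish because the “model” is random; training is what shapes those probabilities so that likely continuations get high scores.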
Memory cache refers to an important process that boosts inference (which is the process by which AI works to generate a response to a user’s query). In essence, caching is an optimization technique, designed to make inference more efficient. AI is obviously driven by high-octane mathematical calculations, and every time those calculations are made, they use up more power. Caching is designed to cut down on the number of calculations a model might have to run by saving particular calculations for future user queries and operations. There are different kinds of memory caching, although one of the more well-known is KV (or key-value) caching. KV caching works in transformer-based models and increases efficiency, driving faster results by reducing the amount of time (and algorithmic labor) it takes to generate answers to user questions. (See: Inference)

A neural network refers to the multi-layered algorithmic structure that underpins deep learning — and, more broadly, the whole boom in generative AI tools following the emergence of large language models. Although the idea of taking inspiration from the densely interconnected pathways of the human brain as a design structure for data processing algorithms dates all the way back to the 1940s, it was the much more recent rise of graphical processing hardware (GPUs) — via the video game industry — that really unlocked the power of this theory. These chips proved well suited to training algorithms with many more layers than was possible in earlier epochs — enabling neural network-based AI systems to achieve far better performance across many domains, including voice recognition, autonomous navigation, and drug discovery. (See: Large language model [LLM])

RAMageddon is the fun new term for a not-so-fun trend that is sweeping the tech industry: an ever-increasing shortage of random access memory, or RAM chips, which power pretty much all the tech products we use in our daily lives. As the AI industry has blossomed, the biggest tech companies and AI labs — all vying to have the most powerful and efficient AI — are buying so much RAM to power their data centers that there’s not much left for the rest of us. And that supply bottleneck means that what’s left is getting more and more expensive. That includes industries like gaming (where major companies have had to raise prices on consoles because it’s harder to find memory chips for their devices), consumer electronics (where the memory shortage could cause the biggest dip in smartphone shipments in more than a decade), and general enterprise computing (because those companies can’t get enough RAM for their own data centers). The surge in prices is only expected to stop after the dreaded shortage ends but, unfortunately, there’s not really much of a sign that’s going to happen anytime soon.

Developing machine learning AIs involves a process known as training. In simple terms, this refers to data being fed in so that the model can learn from patterns and generate useful outputs. Things can get a bit philosophical at this point in the AI stack — since, pre-training, the mathematical structure that’s used as the starting point for developing a learning system is just a bunch of layers and random numbers. It’s only through training that the AI model really takes shape. Essentially, it’s the process of the system responding to characteristics in the data that enables it to adapt outputs towards a sought-for goal — whether that’s identifying images of cats or producing a haiku on demand. It’s important to note that not all AI requires training. Rules-based AIs that are programmed to follow manually predefined instructions — such as linear chatbots — don’t need to undergo training. However, such AI systems are likely to be more constrained than (well-trained) self-learning systems. Still, training can be expensive because it requires lots of inputs — and, typically, the volumes of inputs required for such models have been trending upwards. Hybrid approaches can sometimes be used to shortcut model development and help manage costs, such as doing data-driven fine-tuning of a rules-based AI — meaning development requires less data, compute, energy, and algorithmic complexity than if the developer had started building from scratch. [See: Inference]

When it comes to human-machine communication, there are some obvious challenges. People communicate using human language, while AI programs execute tasks and respond to queries through complex algorithmic processes that are informed by data. In their simplest definition, tokens represent the basic building blocks of human-AI communication, in that they are discrete segments of data that have either been processed or produced by an LLM. Tokens are created via a process known as “tokenization,” which breaks down raw data and refines it into distinct units that are digestible to an LLM. Similar to how a software compiler translates human language into binary code that a computer can digest, tokenization interprets human language for an AI program so that it can prepare a response to a user’s query. There are several different kinds of tokens — including input tokens (the segments of a user’s query that are fed into the model), output tokens (the tokens the model generates as it responds to the request), and reasoning tokens (intermediate tokens produced during the longer, more intensive processes that some requests trigger). With enterprise AI, token usage also determines costs. Since tokens are equivalent to the amount of data being processed by a model, they have also become the means by which the AI industry monetizes its services. Most AI companies charge for LLM usage on a per-token basis. Thus, the more tokens a business burns as it uses an AI program (ChatGPT, for example), the more money it will have to pay its AI service provider (OpenAI).
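To see tokenization in practice, here is a short sketch using tiktoken, OpenAI’s open-source tokenizer library (assuming it is installed; token counts vary from model to model and tokenizer to tokenizer):

```python
import tiktoken

# cl100k_base is the encoding used by several recent OpenAI models.
enc = tiktoken.get_encoding("cl100k_base")

text = "Tokens are the basic building blocks of human-AI communication."
token_ids = enc.encode(text)

print(token_ids)                  # the integer IDs a model actually sees
print(len(token_ids), "tokens")   # the count a per-token price is billed on
print([enc.decode([t]) for t in token_ids])  # each ID mapped back to text
```

Note that tokens rarely line up one-to-one with words: common words may be a single token while rarer ones are split into several pieces, which is why billing by token is not the same as billing by word.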
Transfer learning is a technique where a previously trained AI model is used as the starting point for developing a new model for a different but typically related task — allowing knowledge gained in previous training cycles to be reapplied. Transfer learning can drive efficiency savings by shortcutting model development. It can also be useful when data for the task that the model is being developed for is somewhat limited. But it’s important to note that the approach has limitations: models that rely on transfer learning to gain generalized capabilities will likely require training on additional data in order to perform well in their domain of focus. (See: Fine-tuning)

Weights are core to AI training, as they determine how much importance (or weight) is given to different features (or input variables) in the data used for training the system — thereby shaping the AI model’s output. Put another way, weights are numerical parameters that define what’s most salient in a dataset for the given training task. They achieve their function by applying multiplication to inputs. Model training typically begins with weights that are randomly assigned, but as the process unfolds, the weights adjust as the model seeks to arrive at an output that more closely matches the target. For example, an AI model for predicting housing prices that’s trained on historical real estate data for a target location could include weights for features such as the number of bedrooms and bathrooms, whether a property is detached or semi-detached, and whether it has parking, a garage, and so on. Ultimately, the weights the model attaches to each of these inputs reflect how much they influence the value of a property, based on the given dataset.

This article is updated regularly with new information.
View

Snowflake’s 9,100 AI Customers Signal the Real Indian Enterprise Shift
Snowflake is not competing with model providers. It is ensuring that whatever model an enterprise uses, the output is reliable.
View

Sam Altman responds to ‘incendiary’ New Yorker article after attack on his home
OpenAI CEO Sam Altman published a blog post on Friday evening responding to both an apparent attack on his home and an in-depth New Yorker profile raising questions about his trustworthiness. Early Friday morning, someone allegedly threw a Molotov cocktail at Altman’s San Francisco home. No one was hurt in the incident, and a suspect was later arrested at OpenAI headquarters, where he was threatening to burn down the building, according to the SF Police Department. While the police have not identified the suspect publicly, Altman noted that the incident came a few days after “an incendiary article” was published about him. He said someone had suggested that the article’s publication “at a time of great anxiety about AI” could make things “more dangerous” for him. “I brushed it aside,” Altman said. “Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives.”

The article in question was a lengthy investigative piece written by Ronan Farrow (who won a Pulitzer for reporting that revealed many of the sexual abuse allegations around Harvey Weinstein) and Andrew Marantz (who’s written extensively about technology and politics). Farrow and Marantz said that during interviews with more than 100 people who have knowledge of Altman’s business conduct, most described Altman as someone with “a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart.” Echoing other journalists who have profiled Altman, Farrow and Marantz suggested that many sources raised questions about his trustworthiness, with one anonymous board member saying he combines “a strong desire to please people, to be liked in any given interaction” with “a sociopathic lack of concern for the consequences that may come from deceiving someone.”

In his response, Altman said that looking back, he can identify “a lot of things I’m proud of and a bunch of mistakes.” Among the mistakes, he said, is a tendency towards “being conflict-averse,” which he said has “caused great pain for me and OpenAI.” “I am not proud of handling myself badly in a conflict with our previous board that led to a huge mess for the company,” Altman said, presumably referring to his removal and rapid reinstatement as OpenAI CEO back in 2023. “I have made many other mistakes throughout the insane trajectory of OpenAI; I am a flawed person in the center of an exceptionally complex situation, trying to get a little better each year, always working for the mission.” He added, “I am sorry to people I’ve hurt and wish I had learned more faster.”

Altman also acknowledged that there seems to be “so much Shakespearean drama between the companies in our field,” which he attributed to a “‘ring of power’ dynamic” that “makes people do crazy things.” Of course, the correct way to deal with the ring of power is to destroy it, so Altman added, “I don’t mean that [artificial general intelligence] is the ring itself, but instead the totalizing philosophy of ‘being the one to control AGI.’” His proposed solution is “to orient towards sharing the technology with people broadly, and for no one to have the ring.” Altman concluded by saying that he welcomes “good-faith criticism and debate,” while reiterating his belief that “technological progress can make the future unbelievably good, for your family and mine.” “While we have that debate, we should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally,” he said.
View

India Hits 27 Million Developers on GitHub
Over 2 million developers joined GitHub from India this year.
View
