
Anthropic is having a moment in the private markets; SpaceX could spoil the party

9:38 AM IST · April 4, 2026


Glen Anderson has been brokering trades in private company shares since 2010, back when the number of institutional investors focused on the late-stage private market could be counted on two hands. Today, he says, there are thousands. As president of the investment bank Rainmaker Securities, which focuses solely on private securities markets and facilitates transactions in roughly 1,000 stocks, Anderson has a front-row seat to one of the most nail-biting moments in the history of the secondary market. And right now, he suggests, the narrative has three main characters: Anthropic, OpenAI, and SpaceX. The upshot: the storyline is more complicated than the headlines suggest.

Anderson’s read on Anthropic is consistent with what Bloomberg reported earlier this week: demand for the company’s shares has become almost insatiable. Bloomberg quoted Ken Smythe, founder and CEO of Next Round Capital, saying that buyers had indicated to his outfit that they had $2 billion of cash ready to deploy into Anthropic, even as roughly $600 million in OpenAI shares that investors are trying to sell haven’t found takers. Anderson sees something similar at Rainmaker. “The hardest stock to source in our marketplace is Anthropic,” he told TechCrunch yesterday afternoon from his Miami home. “There’s just no sellers.”

Part of what turbocharged that demand, Anderson argues, was Anthropic’s very public standoff with the Department of Defense — a turn of events that initially seemed like bad news for the company but has wound up becoming a gift. “The app got more popular, people rallied around the company as kind of a hero, taking on big government,” he said. “I think it amplified the story and made it even more differentiated from OpenAI.” That distinction is becoming increasingly meaningful to investors navigating a market where, for years, the prevailing logic was to bet on everyone. Anderson notes that many institutional investors still want exposure to both Anthropic and OpenAI.
“The jury’s still out,” he said, on which AI model will ultimately win – but the momentum, at least in the secondary market, has shifted.

That doesn’t mean OpenAI has fallen off a cliff. Anderson pushes back slightly on a binary reading of the situation. “I wouldn’t say it’s a one-or-the-other conversation,” he said. But the excitement isn’t there. “It’s not nearly as vibrant a market as Anthropic right now,” he acknowledged. On valuation, Anderson broadly confirmed Bloomberg’s reporting that OpenAI shares on the secondary market are trading as if the company were valued at $765 billion — an appreciable discount to the company’s newest $852 billion primary-round valuation. He cautioned that he was working from memory, but said the Bloomberg figure was “in the right range.”

OpenAI itself has tried to assert more control over secondary trading. “People should be extremely cautious of any firm that purports to have access to OpenAI equity, including through an SPV,” an OpenAI spokesperson told Bloomberg, noting the company had established authorized channels through banks, with no fees, to counter what it described as a high-fee broker model. Perhaps tellingly — at least for now — banks including Morgan Stanley and Goldman Sachs have begun offering OpenAI shares to their high-net-worth clients without charging carry fees, according to Bloomberg. Goldman, meanwhile, is charging its customary carry – often 15% to 20% of profits – for clients seeking Anthropic exposure.

What none of this accounts for is SpaceX, which stands apart amid shifting sentiment around these other powerful brands. Anderson describes it as one of the only names in Rainmaker’s universe that never experienced the punishing correction that hit much of the private market between 2022 and 2024, a period when many private companies’ shares fell 60% to 70% from their peaks (after their valuations were run up just as fast).
The rocket and satellite behemoth has “been pretty much consistently up and to the right,” Anderson said. Anderson, who naturally has an economic interest in flattering the company and its earlier backers, credits SpaceX’s management with disciplined pricing and not squeezing every last dollar out of each funding round or tender offer. “A lot of companies will fall for the temptation to maximize the price of their stock in every round,” he said. “The problem is that that doesn’t leave any room for error.” SpaceX, by contrast, played it conservatively, by “not getting too greedy,” and the payoff for earlier investors has been enormous. “You can imagine if someone got in in 2015 what kind of gain they’re sitting on right now,” said Anderson.

To put a finer point on that comment: SpaceX was valued at roughly $12 billion in 2015, when Google and Fidelity jointly invested $1 billion in the company. Someone who got in at that price is now sitting on a gain of more than 80x, with the company valued at more than $1 trillion ahead of its planned IPO.

That IPO now seems imminent. SpaceX filed confidentially this week for an initial public offering, setting the stage for what could be one of the largest market debuts in history, with Elon Musk reportedly aiming to raise between $50 billion and $75 billion, possibly in June. Only Saudi Aramco’s 2019 debut, which valued the energy giant at $1.7 trillion, has come close.

Unsurprisingly, the filing has already changed the dynamics of the secondary market for SpaceX shares, according to Anderson. “Today, I saw a flood of SpaceX investors coming to me saying, ‘Can you give me SpaceX?’” he noted. “It’s been a very active buy side.” But supply is drying up. The closer a company gets to an IPO, the less incentive existing shareholders have to sell, because they can see the liquidity event on the horizon. That’s where things get a little dicier for OpenAI and Anthropic.
Both companies are reportedly exploring public offerings of their own and have signaled they could move this year. But SpaceX, by filing first, is about to test the market’s appetite in a major way, and Anderson suggested that whoever follows will be at a disadvantage. “SpaceX is going to soak up a lot of liquidity,” he said flatly. “There’s only so much money out there allocated to IPOs.”

The first mover gets to the trough first; those who follow face both more scrutiny and, potentially, less capital. It’s a dynamic that plays out in every so-called vertical, and the AI companies aren’t completely immune to it, despite the attention being showered on them right now. Time your IPO too early and you’re the one testing market receptivity. Wait for someone else to go first, and you may find the biggest checks have already been written.

You can hear more of our interview with Anderson in the upcoming episode of the StrictlyVC Download podcast, which drops every Tuesday. In the meantime, check out recent episodes, including those with Whoop CEO Will Ahmed and investor Bill Gurley.


Latest AI News

‘We Have Swarms of Agents’: Yasmeen Ahmad on Google’s Future of Enterprise AI


Google has introduced Knowledge Catalog, a context engine to enhance data interpretation in multi-cloud environments.

3 hours ago


How to Use Netflix's New AI Voice Search Feature: A Step-by-Step Guide


Netflix recently began rolling out a new way for viewers to search for shows and movies on its platform. Voice dictation has long let people search by speaking, but it merely matches keywords. Netflix’s new native AI-based voice search tool instead provides contextual results, taking the intent of the user’s query into account. Currently available in beta to a small set of users, the feature is one the streaming company is asking testers to try out and give feedback on, while also flagging bugs and issues. The company has yet to announce when a stable version of the AI search tool will roll out to a wider global user base.

11 hours ago


Voice AI in India is hard. Wispr Flow is betting on it anyway.


India’s internet users already rely heavily on voice notes, voice search, and multilingual messaging. Turning those habits into a scalable AI business, however, remains difficult because of the country’s linguistic complexity, mixed-language usage, and uneven monetization patterns. Wispr Flow is betting the opportunity is worth the challenge.

The Bay Area-headquartered startup, which builds AI-powered voice input software, says India is now its fastest-growing market, even though voice-based AI products remain early and fragmented in the South Asian nation. That growth has pushed Wispr Flow to expand more aggressively for Indian users, beginning with Hinglish — a hybrid mix of Hindi and English commonly spoken by locals. The startup is also planning broader multilingual voice support, a local hiring push, and, eventually, lower pricing as it looks to expand beyond white-collar users and into Indian households.

Earlier waves of voice technology in India — from digital assistants to WhatsApp voice notes — largely revolved around convenience. AI startups such as Wispr Flow are now betting that generative AI can turn those habits into a broader computing layer. To make the product more relevant for Indian users, Wispr Flow began beta testing a Hinglish voice model earlier this year and launched on Android — India’s dominant mobile operating system — after initially debuting on Mac and Windows before expanding to iOS in 2025.

Co-founder and CEO Tanay Kothari told TechCrunch that the startup initially saw adoption in India largely among white-collar professionals such as managers and engineers, but it’s increasingly seeing broader usage patterns emerge, including among students and older users being onboarded by younger family members. India has emerged as Wispr Flow’s second-largest market after the U.S. in terms of both users and revenue, Kothari said, with growth accelerating following the startup’s recent India-focused push.
The startup has seen faster growth following the rollout of Hinglish support, benefiting from the widespread habit among Indian users of mixing Hindi and English in everyday conversations, particularly as users began expanding beyond work-focused use cases into more personal communication. “The biggest thing is people are starting to use it more in personal apps,” Kothari said, pointing to messaging platforms such as WhatsApp and social media apps where users frequently switch between Hindi and English while speaking.

Wispr Flow, Kothari said, was growing about 60% month over month in India earlier this year, but growth accelerated to around 100% following its recent India launch campaign. The startup last month rolled out a broader marketing push in the country, including a launch video from Kothari and offline campaigns in Bengaluru aimed at introducing the product to more mainstream users.

Kothari told TechCrunch that Wispr Flow plans to expand its multilingual voice support over the next 12 months, allowing users to switch between English and other Indian languages beyond Hindi while speaking. In December, the startup introduced India-specific pricing at ₹320 (around $3.40) per month for annual plans, significantly lower than its standard $12 monthly pricing globally. The startup eventually wants to bring costs down even further — potentially to as low as ₹10–20 (around 10–20 cents) per month — as it looks to expand beyond white-collar and urban users. “I want every single person in the country to be able to use Wispr Flow, and that’s what we’re really building for,” Kothari said. “That’s going to happen slowly and steadily.”

Earlier this year, Wispr Flow hired Nimisha Mehta to lead its India operations as it looks to expand its local presence. Kothari told TechCrunch the startup plans to grow to around 30 employees in India over the next year, building out consumer growth, partnerships, and enterprise teams alongside existing engineering and support functions.
The startup currently has about 60 employees globally.

Wispr Flow is not alone in viewing India as a key market for voice-based AI products. Companies including ElevenLabs have highlighted India as an important growth market for some time. Similarly, local startups such as Gnani.ai, Smallest AI, and Bolna have continued attracting investor interest as voice-based AI tools gain wider adoption across consumer and business use cases. Nevertheless, turning voice AI into a mainstream consumer product in India remains challenging despite growing interest from startups and investors. “India is the ultimate stress test for voice AI,” Neil Shah, vice president of research at Counterpoint Research, told TechCrunch, adding that “linguistic, accent, and contextual friction” continue to slow wider adoption.

Data shared with TechCrunch from Sensor Tower shows Wispr Flow was downloaded more than 2.5 million times globally between October 2025 and April 2026, with India accounting for 14% of installs during the period, making India its second-largest market by downloads (after, as mentioned, the U.S.). India, however, contributed only around 2% of Wispr Flow’s in-app purchase revenue during the same period, according to Sensor Tower.

The startup, meanwhile, remains largely desktop-driven globally. Wispr Flow’s usage in India, Kothari said, is currently split roughly 50:50 between desktop and mobile, compared with an 80:20 desktop-heavy mix in the U.S. Kothari said Wispr Flow sees strong repeat usage, claiming roughly 70% retention after 12 months both globally and in India. The startup also currently employs two full-time linguistics PhDs as it continues refining multilingual voice models and expanding support for additional Indian language combinations.

11 hours ago


So you’ve heard these AI terms and nodded along; let’s fix that


Artificial intelligence is changing the world, and simultaneously inventing a whole new language to describe how it’s doing it. Spend five minutes reading about AI and you’ll run into LLMs, RAG, RLHF, and a dozen other terms that can make even very smart people in the tech world feel insecure. This glossary is our attempt to fix that. We update it regularly as the field evolves, so consider it a living document, much like the AI systems it describes.

Artificial general intelligence, or AGI, is a nebulous term. But it generally refers to AI that’s more capable than the average human at many, if not most, tasks. OpenAI CEO Sam Altman once described AGI as the “equivalent of a median human that you could hire as a co-worker.” Meanwhile, OpenAI’s charter defines AGI as “highly autonomous systems that outperform humans at most economically valuable work.” Google DeepMind’s understanding differs slightly from these two definitions; the lab views AGI as “AI that’s at least as capable as humans at most cognitive tasks.” Confused? Not to worry — so are experts at the forefront of AI research.

An AI agent refers to a tool that uses AI technologies to perform a series of tasks on your behalf — beyond what a more basic AI chatbot could do — such as filing expenses, booking tickets or a table at a restaurant, or even writing and maintaining code. However, as we’ve explained before, there are lots of moving pieces in this emergent space, so “AI agent” might mean different things to different people. Infrastructure is also still being built out to deliver on its envisaged capabilities. But the basic concept implies an autonomous system that may draw on multiple AI systems to carry out multistep tasks.

Think of API endpoints as “buttons” on the back of a piece of software that other programs can press to make it do things.
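As a minimal sketch of that “buttons” metaphor — the endpoint names and handler functions below are hypothetical, invented purely for illustration, not drawn from any real service:

```python
# Hypothetical sketch: a software surface exposed as named "buttons"
# that another program (or an AI agent) can press.

def get_temperature(room: str) -> float:
    # Stand-in for a real smart-home sensor reading
    return {"living_room": 21.5, "bedroom": 19.0}.get(room, 20.0)

def set_light(room: str, on: bool) -> str:
    return f"{room} light {'on' if on else 'off'}"

# The "back of the software": endpoint name -> handler
ENDPOINTS = {
    "GET /temperature": get_temperature,
    "POST /light": set_light,
}

# Another program presses a button without knowing the internals
result = ENDPOINTS["POST /light"]("bedroom", True)
print(result)  # bedroom light on
```

A real endpoint would sit behind a network call, but the shape is the same: a named action, some parameters, and a result the caller can use.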
Developers use these interfaces to build integrations — for example, allowing one application to pull data from another, or enabling an AI agent to control third-party services directly without a human manually operating each interface. Most smart home devices and connected platforms have these hidden buttons available, even if ordinary users never see or interact with them. As AI agents grow more capable, they are increasingly able to find and use these endpoints on their own, opening up powerful — and sometimes unexpected — possibilities for automation.

Given a simple question, a human brain can answer without even thinking too much about it — things like “which animal is taller, a giraffe or a cat?” But in many cases, you need a pen and paper to come up with the right answer because there are intermediary steps. For instance, if a farmer has chickens and cows, and together they have 40 heads and 120 legs, you might need to write down a simple equation to come up with the answer (20 chickens and 20 cows). In an AI context, chain-of-thought reasoning for large language models means breaking down a problem into smaller, intermediate steps to improve the quality of the end result. It usually takes longer to get an answer, but the answer is more likely to be correct, especially in a logic or coding context. Reasoning models are developed from traditional large language models and optimized for chain-of-thought thinking thanks to reinforcement learning. (See: Large language model)

A coding agent is a more specific concept than an “AI agent” — a program that can take actions on its own, step by step, to complete a goal — specialized for software development. Rather than simply suggesting code for a human to review and paste in, a coding agent can write, test, and debug code autonomously, handling the kind of iterative, trial-and-error work that typically consumes a developer’s day.
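That iterative loop — propose code, run the tests, retry with a fix — can be sketched in miniature. Everything here is invented for the demo: the “model” is just a canned list of candidate patches, not a real LLM.

```python
# Hypothetical sketch of a coding agent's inner loop: propose a patch,
# run the tests, and try again until they pass.

candidate_patches = [
    "def add(a, b): return a - b",   # buggy first attempt
    "def add(a, b): return a + b",   # corrected second attempt
]

def run_tests(source: str) -> bool:
    namespace = {}
    exec(source, namespace)          # "apply" the candidate patch
    return namespace["add"](2, 3) == 5

final = None
for attempt, patch in enumerate(candidate_patches, start=1):
    if run_tests(patch):
        final = patch
        break

print(f"tests passed on attempt {attempt}")  # tests passed on attempt 2
```

A production agent replaces the canned list with model-generated code and the toy test with a project’s real test suite, but the generate-verify-retry structure is the same.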
These agents can operate across entire codebases, spotting bugs, running tests, and pushing fixes with minimal human oversight. Think of it like hiring a very fast intern who never sleeps and never loses focus — though, as with any intern, a human still needs to review the work.

Although somewhat of a multivalent term, compute generally refers to the vital computational power that allows AI models to operate. This type of processing fuels the AI industry, giving it the ability to train and deploy its powerful models. The term is often shorthand for the kinds of hardware that provide that computational power — GPUs, CPUs, TPUs, and other forms of infrastructure that form the bedrock of the modern AI industry.

Deep learning is a subset of self-improving machine learning in which AI algorithms are designed with a multi-layered artificial neural network (ANN) structure. This allows them to make more complex correlations than simpler machine learning-based systems, such as linear models or decision trees. The structure of deep learning algorithms draws inspiration from the interconnected pathways of neurons in the human brain. Deep learning models are able to identify important characteristics in data themselves, rather than requiring human engineers to define these features. The structure also supports algorithms that can learn from errors and, through a process of repetition and adjustment, improve their own outputs. However, deep learning systems require a lot of data points to yield good results (millions or more). They also typically take longer to train than simpler machine learning algorithms, so development costs tend to be higher. (See: Neural network)

Diffusion is the tech at the heart of many art-, music-, and text-generating AI models. Inspired by physics, diffusion systems slowly “destroy” the structure of data — for example, photos, songs, and so on — by adding noise until there’s nothing left.
In physics, diffusion is spontaneous and irreversible — sugar diffused in coffee can’t be restored to cube form. But diffusion systems in AI aim to learn a sort of “reverse diffusion” process, gaining the ability to recover data from noise.

Distillation is a technique used to extract knowledge from a large AI model using a “teacher-student” setup. Developers send requests to a teacher model and record the outputs; the answers are sometimes compared with a dataset to check their accuracy. These outputs are then used to train the student model, which learns to approximate the teacher’s behavior. Distillation can be used to create a smaller, more efficient model based on a larger one with minimal distillation loss. This is likely how OpenAI developed GPT-4 Turbo, a faster version of GPT-4. While all AI companies use distillation internally, some may also have used it to catch up with frontier models; distilling from a competitor usually violates the terms of service of AI APIs and chat assistants.

Fine-tuning refers to the further training of an AI model to optimize performance for a more specific task or area than was previously a focal point of its training — typically by feeding in new, specialized (i.e., task-oriented) data. Many AI startups are taking large language models as a starting point to build a commercial product but are vying to amp up utility for a target sector or task by supplementing earlier training cycles with fine-tuning based on their own domain-specific knowledge and expertise. (See: Large language model [LLM])

A GAN, or Generative Adversarial Network, is a type of machine learning framework that underpins some important developments in generative AI when it comes to producing realistic data — including (but not only) deepfake tools. GANs involve a pair of neural networks, one of which draws on its training data to generate an output that is passed to the other model to evaluate.
The two models are essentially programmed to try to outdo each other. The generator tries to get its output past the discriminator, while the discriminator works to spot artificially generated data. This structured contest can optimize AI outputs to be more realistic without the need for additional human intervention. GANs work best for narrower applications (such as producing realistic photos or videos), however, rather than general-purpose AI.

Hallucination is the AI industry’s preferred term for AI models making stuff up – literally generating information that is incorrect. Obviously, it’s a huge problem for AI quality. Hallucinations produce GenAI outputs that can be misleading and could even lead to real-life risks, with potentially dangerous consequences (think of a health query that returns harmful medical advice). The problem of AIs fabricating information is thought to arise as a consequence of gaps in training data. Hallucinations are contributing to a push toward increasingly specialized and/or vertical AI models — i.e., domain-specific AIs that require narrower expertise – as a way to reduce the likelihood of knowledge gaps and shrink disinformation risks.

Inference is the process of running an AI model: setting a model loose to make predictions or draw conclusions from previously seen data. To be clear, inference can’t happen without training; a model must learn patterns in a set of data before it can effectively extrapolate from that training data. Many types of hardware can perform inference, ranging from smartphone processors to beefy GPUs to custom-designed AI accelerators. But not all of them can run models equally well. Very large models would take ages to make predictions on, say, a laptop versus a cloud server with high-end AI chips. (See: Training)

Large language models, or LLMs, are the AI models used by popular AI assistants, such as ChatGPT, Claude, Google’s Gemini, Meta’s AI Llama, Microsoft Copilot, or Mistral’s Le Chat.
When you chat with an AI assistant, you interact with a large language model that processes your request directly or with the help of different available tools, such as web browsing or code interpreters. LLMs are deep neural networks made of billions of numerical parameters (or weights; see below) that learn the relationships between words and phrases and create a representation of language — a sort of multidimensional map of words. These models are created by encoding the patterns they find in billions of books, articles, and transcripts. When you prompt an LLM, the model generates the most likely pattern that fits the prompt. (See: Neural network)

Memory cache refers to an important process that speeds up inference (the process by which an AI model generates a response to a user’s query). In essence, caching is an optimization technique designed to make inference more efficient. AI is driven by heavy mathematical calculation, and every calculation uses more power. Caching cuts down on the number of calculations a model has to run by saving particular results for future user queries and operations. There are different kinds of memory caching; one of the better known is KV (or key-value) caching. KV caching works in transformer-based models and increases efficiency, driving faster results by reducing the amount of time (and algorithmic labor) it takes to generate answers to user questions. (See: Inference)

A neural network refers to the multi-layered algorithmic structure that underpins deep learning — and, more broadly, the whole boom in generative AI tools following the emergence of large language models.
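The “multi-layered” idea can be made concrete with a minimal forward pass: two layers of weighted sums with a simple nonlinearity between them. The weights below are made up for illustration; a real network learns them from data.

```python
# A minimal two-layer neural network forward pass, stdlib only.

def relu(x):
    # a common nonlinearity: pass positives through, zero out negatives
    return max(0.0, x)

def layer(inputs, weight_rows):
    # each output neuron = nonlinearity(weighted sum of all inputs)
    return [relu(sum(w * x for w, x in zip(row, inputs))) for row in weight_rows]

hidden_weights = [[0.5, -0.2], [0.1, 0.9]]   # 2 inputs -> 2 hidden units
output_weights = [[1.0, 0.5]]                # 2 hidden units -> 1 output

x = [1.0, 2.0]
hidden = layer(x, hidden_weights)
output = layer(hidden, output_weights)
print(output)
```

Real networks stack many such layers with millions or billions of weights, and the layered structure is exactly what “deep” in deep learning refers to.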
Although the idea of taking inspiration from the densely interconnected pathways of the human brain as a design structure for data-processing algorithms dates all the way back to the 1940s, it was the much more recent rise of graphics processing units (GPUs) — via the video game industry — that really unlocked the power of this theory. These chips proved well suited to training algorithms with many more layers than was possible in earlier epochs, enabling neural network-based AI systems to achieve far better performance across many domains, including voice recognition, autonomous navigation, and drug discovery. (See: Large language model [LLM])

Open source refers to software — or, increasingly, AI models — where the underlying code is made publicly available for anyone to use, inspect, or modify. In the AI world, Meta’s Llama family of models is a prominent example; Linux is the famous historical parallel in operating systems. Open source approaches allow researchers, developers, and companies around the world to build on top of one another’s work, accelerating progress and enabling independent safety audits that closed systems cannot easily provide. Closed source means the code is private — you can use the product but not see how it works, as is the case with OpenAI’s GPT models — a distinction that has become one of the defining debates in the AI industry.

Parallelization means doing many things at the same time instead of one after another — like having 10 employees working on different parts of a project simultaneously instead of one employee doing everything sequentially. In AI, parallelization is fundamental to both training and inference: modern GPUs are specifically designed to perform thousands of calculations in parallel, which is a big reason why they became the hardware backbone of the industry.
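The idea can be sketched with Python’s standard thread pool — with the caveat that this only illustrates the scheduling pattern; real AI workloads parallelize numeric kernels across GPU cores, not Python threads.

```python
# A toy illustration of parallelization: farming independent work
# items out to a pool of workers instead of looping one by one.
from concurrent.futures import ThreadPoolExecutor

def square(n: int) -> int:
    return n * n  # stand-in for one independent chunk of work

items = list(range(8))

# Sequential: one "employee" does everything in order
sequential = [square(n) for n in items]

# Parallel: several workers each take a share of the items
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(square, items))

print(parallel == sequential)  # True: same results, different scheduling
```

The key property is that the work items are independent, so they can be computed in any order — which is exactly what makes matrix math such a good fit for GPUs.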
As AI systems grow more complex and models grow larger, the ability to parallelize work across many chips and many machines has become one of the most important factors in determining how quickly and cost-effectively models can be built and deployed. Research into better parallelization strategies is now a field of study in its own right.

RAMageddon is the fun new term for a not-so-fun trend sweeping the tech industry: an ever-worsening shortage of random access memory, or RAM, the chips that power pretty much all the tech products we use in our daily lives. As the AI industry has blossomed, the biggest tech companies and AI labs — all vying to have the most powerful and efficient AI — are buying so much RAM for their data centers that there’s not much left for the rest of us, and that supply bottleneck means what’s left is getting more and more expensive. The squeeze extends to industries like gaming (where major companies have had to raise prices on consoles because it’s harder to find memory chips for their devices), consumer electronics (where the memory shortage could cause the biggest dip in smartphone shipments in more than a decade), and general enterprise computing (because those companies can’t get enough RAM for their own data centers). The surge in prices is only expected to stop after the shortage ends but, unfortunately, there’s not really much of a sign that’s going to happen anytime soon.

Reinforcement learning is a way of training AI where a system learns by trying things and receiving rewards for correct answers — like training your beloved pet with treats, except the “pet” in this scenario is a neural network and the “treat” is a mathematical signal indicating success. Unlike supervised learning, where a model is trained on a fixed dataset of labeled examples, reinforcement learning lets a model explore its environment, take actions, and continuously update its behavior based on the feedback it receives.
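A bare-bones taste of that try-and-get-rewarded loop is the two-armed bandit: the agent pulls both “levers,” collects rewards, and gradually learns which one pays off more. The payout probabilities below are invented for the demo and hidden from the agent.

```python
# Minimal reinforcement learning sketch: a two-armed bandit with
# an epsilon-greedy agent. Payout probabilities are made up.
import random

random.seed(42)

true_payout = {"A": 0.2, "B": 0.8}     # hidden from the agent
value = {"A": 0.0, "B": 0.0}           # agent's running reward estimates
counts = {"A": 0, "B": 0}

for step in range(2000):
    # explore 10% of the time, otherwise exploit the best-known arm
    if random.random() < 0.1:
        arm = random.choice(["A", "B"])
    else:
        arm = max(value, key=value.get)
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    counts[arm] += 1
    # incremental average: nudge the estimate toward the new reward
    value[arm] += (reward - value[arm]) / counts[arm]

print(max(value, key=value.get))  # the agent settles on the better arm
```

Training a neural network with RL follows the same pattern, just with a policy network instead of a lookup table and gradient updates instead of a running average.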
This approach has proven especially powerful for training AI to play games, control robots, and, more recently, sharpen the reasoning ability of large language models. Techniques like reinforcement learning from human feedback, or RLHF, are now central to how leading AI labs fine-tune their models to be more helpful, accurate, and safe.

When it comes to human-machine communication, there are some obvious challenges — people communicate using human language, while AI programs execute tasks through complex algorithmic processes informed by data. Tokens bridge that gap: they are the basic building blocks of human-AI communication, representing discrete segments of data that have been processed or produced by an LLM. They are created through a process called tokenization, which breaks down raw text into bite-sized units a language model can digest — often parts of words rather than whole ones — somewhat as a compiler translates source code into machine instructions a computer can execute. In enterprise settings, tokens also determine cost: most AI companies charge for LLM usage on a per-token basis, meaning the more a business uses, the more it pays.

Throughput refers to how much can be processed in a given period of time, so token throughput is essentially a measure of how much AI work a system can handle at once. High token throughput is a key goal for AI infrastructure teams, since it determines how many users a model can serve simultaneously and how quickly each of them receives a response.
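A back-of-the-envelope illustration of both ideas — note that the whitespace “tokenizer” and the traffic figures here are invented for the demo; real tokenizers split text into subword pieces, not whole words:

```python
# Toy tokenization plus a simple throughput calculation.

def naive_tokenize(text: str) -> list[str]:
    return text.split()  # real models use subword units, not words

prompt = "which animal is taller a giraffe or a cat"
tokens = naive_tokenize(prompt)
print(len(tokens))  # 9

# Throughput: tokens processed per second across all users.
tokens_generated = 120_000      # hypothetical tokens served
elapsed_seconds = 60.0          # over one minute
throughput = tokens_generated / elapsed_seconds
print(throughput)  # 2000.0 (tokens per second)
```

Because billing and serving capacity are both counted in tokens, this tokens-per-second figure is the number infrastructure teams watch most closely.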
AI researcher Andrej Karpathy has described feeling anxious when his AI subscriptions sit idle — echoing the feeling he had as a grad student when expensive computer hardware wasn’t being fully utilized — a sentiment that captures why maximizing token throughput has become something of an obsession in the field.

Developing machine learning AIs involves a process known as training. In simple terms, this refers to feeding in data so that the model can learn from patterns and generate useful outputs. Essentially, it’s the process by which the system responds to characteristics in the data, enabling it to adapt its outputs toward a sought-after goal — whether that’s identifying images of cats or producing a haiku on demand. Training can be expensive because it requires lots of inputs, and the volumes required have been trending upward — which is why hybrid approaches, such as fine-tuning a pre-trained model with targeted data, can help manage costs without starting entirely from scratch. (See: Inference)

Transfer learning is a technique in which a previously trained AI model is used as the starting point for developing a new model for a different but typically related task, allowing knowledge gained in previous training cycles to be reapplied. Transfer learning can drive efficiency savings by shortcutting model development. It can also be useful when data for the task the model is being developed for is somewhat limited. But the approach has limitations: models that rely on transfer learning to gain generalized capabilities will likely require training on additional data to perform well in their domain of focus. (See: Fine-tuning)

Weights are core to AI training, as they determine how much importance (or weight) is given to different features (or input variables) in the data used for training the system — thereby shaping the AI model’s output.
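At their simplest, weights do exactly one thing: multiply each input feature and sum the results. A minimal sketch, with feature values and weight numbers that are entirely hypothetical:

```python
# What weights do, stripped to the bone: multiply and sum.

features = {"bedrooms": 3, "bathrooms": 2, "has_parking": 1}

# Learned importance of each feature (hypothetical numbers)
weights = {"bedrooms": 50_000, "bathrooms": 25_000, "has_parking": 10_000}

prediction = sum(weights[name] * value for name, value in features.items())
print(prediction)  # 210000
```

Training is the process of nudging those multipliers until predictions line up with the data.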
Put another way, weights are numerical parameters that define what’s most salient in a dataset for the given training task. They achieve their function by applying multiplication to inputs. Model training typically begins with randomly assigned weights, but as the process unfolds, the weights adjust as the model seeks to arrive at an output that more closely matches the target. For example, an AI model for predicting housing prices that’s trained on historical real estate data for a target location could include weights for features such as the number of bedrooms and bathrooms, whether a property is detached or semi-detached, and whether it has parking, a garage, and so on. Ultimately, the weights the model attaches to each of these inputs reflect how much they influence the value of a property, based on the given dataset.

Validation loss is a number that tells you how well an AI model is learning during training — and lower is better. Researchers track it closely as a kind of real-time report card, using it to decide when to stop training, when to adjust hyperparameters, or whether to investigate a potential problem. One of the key concerns it helps flag is overfitting, a condition in which a model memorizes its training data rather than truly learning patterns it can generalize to new situations. Think of it as the difference between a student who genuinely understands the material and one who simply memorized last year’s exam — validation loss helps reveal which one your model is becoming.

This article is updated regularly with new information.

15 hours ago
