Introduction
This article reviews in-depth insights and market analysis on generative AI from top VC firms, including Sequoia Capital, A16Z, Lightspeed Venture Partners, and Madrona Venture Group. It is organized by firm, detailing each institution's key reports and research findings, including technology evolution paths, market landscape analysis, application scenario exploration, and investment opportunity identification. By systematically reviewing these top VC perspectives, readers can understand the complete path of generative AI from technological breakthrough to commercial application, as well as future development trends and investment directions.
Sequoia Capital
As a global top-tier venture capital firm, Sequoia Capital has published multiple in-depth research reports in the generative AI field, analyzing development trends and commercial value from multiple dimensions including technology evolution, market landscape, and application scenarios.
Generative AI: A Creative New World
(September 19, 2022)
Generative AI is triggering a revolution in creativity and productivity. Its core lies in enabling machines to generate text, code, images, audio, and even 3D content through technologies like large language models (such as the GPT series) and diffusion models, gradually evolving from auxiliary tools to partners with “human-like creativity.” Technology development has gone through four stages: the small model analysis era before 2015, the scale race driven by Transformer architecture (2015-2022), the cost optimization and open source popularization stage (2022+), and the current explosion period characterized by vertical domain killer applications.
Typical applications already cover code generation (GitHub Copilot improves developer efficiency by 55%), artistic creation (Midjourney generates hundreds of millions in annual revenue), legal documents (Harvey's custom LLM), and marketing content generation, and are extending to complex scenarios like film production and industrial design. Key trends show the industry transitioning from technology-driven (Act 1) to demand-driven (Act 2): early applications focused on demonstrating technical possibility (such as simple text/image generation) suffer low user retention (median retention of only 14%), while successful cases like Character AI (users average 2 hours daily) close the value loop through deep workflow integration and multimodal interaction.
The technology frontier has entered the reasoning-capability stage (Act o1), with OpenAI's Strawberry model simulating human deep thinking through “reasoning-time computation” and demonstrating AlphaGo-like decision-making in code and mathematics, though open-domain creation still awaits a breakthrough. Challenges and opportunities coexist: on one hand, computing bottlenecks, copyright disputes (such as Japan's determination that using copyrighted data for model training does not infringe IP), and ethical risks continue to mount; on the other hand, the split between the model layer (OpenAI, Anthropic, etc.) and the application layer is accelerating, and industry solutions need domain-specific cognitive architectures (such as Factory's code review bot) to build moats of user networks and workflow stickiness. In the future, as the marginal cost of creation approaches zero, generative AI is expected to unlock trillions of dollars in economic value, reshaping entire industry chains from medical documentation to film production.
The report details the core architecture of generative AI applications: generative AI apps are built on top of large models (such as GPT-3 or Stable Diffusion). As applications gain more user data, they can fine-tune models to improve model quality/performance for their specific problem space and reduce model size/costs. The report predicts that the best generative AI companies can generate sustainable competitive advantages by executing relentlessly on the flywheel between user engagement/data and model performance. To win, teams must get this flywheel going by 1) having exceptional user engagement → 2) turning more user engagement into better model performance (prompt improvements, model fine-tuning, user choices as labeled training data) → 3) using great model performance to drive more user growth and engagement.
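To make the flywheel's second step concrete, the sketch below shows one plausible way an application could log which candidate output a user accepts and export those choices as labeled fine-tuning examples. The field names and JSONL format are illustrative assumptions, not any particular vendor's fine-tuning API.

```python
import json
from dataclasses import dataclass, field
from typing import List

@dataclass
class Interaction:
    """One user request plus the candidate outputs the app showed."""
    prompt: str
    candidates: List[str]
    chosen_index: int  # which candidate the user accepted, copied, or kept

@dataclass
class FeedbackLog:
    """Accumulates user choices so they can become labeled training data."""
    interactions: List[Interaction] = field(default_factory=list)

    def record(self, prompt: str, candidates: List[str], chosen_index: int) -> None:
        self.interactions.append(Interaction(prompt, candidates, chosen_index))

    def export_finetune_examples(self, path: str) -> None:
        """Write prompt/chosen-completion pairs as JSONL for a later fine-tuning run."""
        with open(path, "w") as f:
            for it in self.interactions:
                example = {"prompt": it.prompt, "completion": it.candidates[it.chosen_index]}
                f.write(json.dumps(example) + "\n")

# Usage: every accepted suggestion becomes a labeled example (hypothetical data).
log = FeedbackLog()
log.record("Write a tagline for a hiking app", ["Go further.", "Trails await."], chosen_index=1)
log.export_finetune_examples("finetune_data.jsonl")
```

In this framing, better engagement produces more accepted suggestions, which produces more labeled data, which in turn improves the model, closing the loop the report describes.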
AI Recruits a New Hybrid Workforce
(January 26, 2023)
This report deeply analyzes how AI is transforming office software suites and work methods. The report points out that AI models are getting better at generating human-level work, and that the hard part now is bringing magical products to the office. AI is computation: best guesses based on statistics. It is work: joules of energy dissipated through the movement of information. Despite the flashy new veneer, AI is not a revolution in communication but in productivity. It's not the printing press or the telegraph; it's the assembly line and the jet engine: technologies that produce work rather than transfer information.
The report details how document editors, spreadsheets, and presentation makers are being disrupted by AI. For document editors, the report notes that the big emerging opportunity in the age of generative text is to innovate on the writing experience itself. AI can intervene in three distinct parts of the writing process: planning, drafting, and revising. Current generative systems focus on the first two. Most applications will draft sentences and paragraphs for you as a completion of your prompt; more sophisticated approaches might return an outline for a blog post based on a headline.
For spreadsheets, the report notes that bringing computational abilities to words and images is a new thing, but spreadsheets are computational to their core. Automating quantitative work in spreadsheets should be a slam dunk application for this new era. Most likely this will not come primarily through generative approaches but from more rigorous formal methods. Math is still math. The biggest disruption may be the merger of spreadsheets into document environments (see Notion and Coda) to create code notebooks for office workers.
The report emphasizes that enduring AI-native companies will integrate human and AI workers into new operating systems and hardware. The world of highly-distributed AI applications capable of producing human-level work products is very different from the winner-take-all mass distribution of the internet era. For organizations, the profusion of generative outputs will create obvious risks of inappropriate content but also less obvious ones like cultural drift. If they rely on third-party models supplied through applications, they risk losing control of their own message. Top brands will want to fine-tune their models for their voice and style while also hiring and retaining workers that can tell the difference.
AI 50
(April 11, 2023)
Sequoia Capital's AI 50 report lists the 50 most influential AI companies in 2023. This report showcases the rapid development and innovation vitality in the generative AI field, covering various segments from foundation models to application layers. The report reflects the rapid transformation of the AI industry from technology research to commercial applications, as well as innovation opportunities in various vertical fields.
A16Z
A16Z (Andreessen Horowitz) has published multiple research reports in the generative AI field, focusing on the gaming AI revolution and consumer applications, deeply analyzing application scenarios and market opportunities for generative AI in different vertical fields.
The Generative AI Revolution in Games
(November 17, 2022)
This report deeply analyzes how generative AI is radically transforming the gaming industry. The report points out that games are the most complex form of entertainment, in terms of the sheer number of asset types involved (2D art, 3D art, sound effects, music, dialog, etc.). Games are also the most interactive, with a heavy emphasis on real-time experiences. This creates a steep barrier to entry for new game developers, as well as a steep cost to produce a modern, chart-topping game. This also creates a tremendous opportunity for generative AI disruption.
The report details generative AI applications at various stages of game production: 2D Images (concept art, production art), 3D Assets (3D models, textures, animation), Level Design and World Building, Audio (sound effects, music, speech and dialog), NPCs or Player Characters (intelligent character interaction). The report predicts that the price of content will drop dramatically, going effectively to zero in some cases. One developer has told us that their time to generate concept art for a single image, start to finish, has dropped down from 3 weeks to a single hour: a 120-to-1 reduction.
Core predictions of the report include: learning how to use generative AI effectively will become a marketable skill; lowering barriers will result in more risk-taking and creative exploration; a rise in AI-assisted “micro game studios”; an increase in the number of games released each year; new game types created that were not possible before generative AI; value will accrue to industry specific AI tools, and not just foundational models; legal challenges are coming; programming will not be disrupted as deeply as artistic content, at least not yet.
The report also provides a detailed market map, listing various segments and key companies in the gaming AI field. The report emphasizes that this is an incredible time to be a game creator! Thanks in part to the tools described in this blog post, it has never been easier to generate the content needed to build a game, even if your game is as large as the entire planet! It's even possible to one day imagine an entire personalized game, created just for the player, based on exactly what the player wants.
How Are Consumers Using Generative AI?
(September 13, 2023)
This report ranks the top 50 generative AI web products by monthly visits, based on SimilarWeb traffic data (as of June 2023), and examines how these products have grown over time and where that growth is coming from. Core findings include: most leading products are built from the “ground up” around generative AI, with 80% of these websites being new; ChatGPT has a massive lead, representing 60% of monthly traffic to the entire top 50 list, with an estimated 1.6 billion monthly visits and 200 million monthly users.
The report found that LLM assistants (like ChatGPT) are dominant, but companionship and creative tools are on the rise. General LLM chatbots represent 68% of total consumer traffic to the top 50 list. However, two other categories have started to drive significant usage in recent months: AI companions (such as CharacterAI) and content generation tools (such as Midjourney and ElevenLabs). Within the broader content generation category, image generation is the top use case with 41% of traffic, followed by prosumer writing tools at 26%, and video generation at 8%.
The report notes that early “winners” have emerged, but most product categories are still up for grabs. Good news for builders: despite the surge in interest in generative AI, in many categories there is not yet a runaway success. The majority of companies on this list do no paid marketing (at least, none that SimilarWeb is able to attribute). There is significant free traffic “available” via X, Reddit, Discord, and email, as well as word-of-mouth and referral growth. Consumers are willing to pay for GenAI: 90% of companies on the list are already monetizing, nearly all of them via a subscription model.
The report also analyzes mobile apps as a GenAI platform. Consumer AI products have, thus far, been largely browser-first, rather than app-first. Even ChatGPT took 6 months to launch a mobile app! Only 15 companies on the list currently have a live mobile app, and almost all of them see less than 10% of total monthly traffic come from their app versus the web. There are 3 notable exceptions: prosumer design studio PhotoRoom (estimated 88% of traffic on app), companion app CharacterAI (46% of traffic on app), and text-to-speech product Speechify (20% of traffic on app).
Lightspeed Venture Partners
Lightspeed Venture Partners focuses on the intersection of fintech and AI, publishing multiple research reports on Fintech x AI, deeply analyzing application scenarios and investment opportunities for AI in the financial field.
Fintech x AI: The Lightspeed View
(June 8, 2023)
This report deeply analyzes AI applications in the fintech field. The report points out that AI is not new in fintech: there is a long history of using artificial intelligence and machine learning in fraud, underwriting, and investing, especially by risk teams, lenders, and quant hedge funds. In fact, the Financial Crimes Enforcement Network (FinCEN) used forms of artificial intelligence back in the early 1990s to evaluate large volumes of cash transactions and identify potential money laundering.
The report distinguishes between predictive AI and generative AI: Predictive AI excels at identifying patterns and trends in large datasets, enabling accurate forecasting and prediction. It is particularly well-suited for quantitative analysis, risk modeling, and market predictions based on historical data. Generative AI shines in creative problem-solving and uncovering hidden patterns that may not be evident through traditional predictive models. Its ability to generate new insights and solutions makes it valuable in tackling complex challenges and driving innovation in fintech.
The report proposes specific views on AI applications in fintech: if trained on general data, generative AI best serves as a starting point for inspiration. Money is highly emotional; spending or investing is a high-intensity, often personal financial decision. Thus, a tool that more closely mimics speaking with a human will still hold tremendous value in the financial world, and companies building investing research tools and customer service chatbots are great applications. If an LLM is trained on domain-specific knowledge, generative AI can be used for a broader set of tasks such as document automation, investing research, robo claims, and wealth advisory.
The report also details key learnings on building fintech AI. In terms of frameworks, companies building LLMs and AI interfaces can differentiate in three main areas: data sources (public vs. proprietary, domain-specific vs. general), model parameters (training objectives, number of parameters, tokenization efficiency), and user experience (prompt interface / instructions such as chain of thought). Fintech companies can use AI to deepen an existing advantage in one of four areas: cost to acquire, cost to underwrite, cost to serve, or cost of capital.
The report notes that the largest segment within fintech (in terms of number of tools built) today is AI-driven research tools (mainly for investing). One of the largest revenue generating segments within fintech today is vertical SaaS fintech applied to a legal use case (personal injury law / insurtech). The report also emphasizes the importance of building moats: data, user acquisition, technical depth, and user experience are all key factors.
Gaming X AI Market Map: The Infinite Power Of Play
(July 27, 2023)
Lightspeed Venture Partners' gaming AI market map details application scenarios and market opportunities for AI in the gaming industry. The report covers various levels from game development tools to in-game AI experiences, showcasing how AI is transforming game creation and game experiences.
Madrona Venture Group
Madrona Venture Group focuses on the generative AI tech stack and intelligent agents, publishing multiple research reports on AI technology architecture, application frameworks, and agents, and deeply analyzing the technical infrastructure and future development directions of generative AI.
A Wave of Personal Agents is Coming
(June 14, 2023)
This report deeply analyzes development trends in intelligent agents. The report points out that generative AI, particularly ChatGPT, has inspired an entire industry and cultural conversation through the almost magical experience of generating images, text, and video from basic natural language prompts. This generative technology has produced some inspiring, even moving, early results. While constantly impressed by the applications that sit on top of these foundation models, Madrona continues to refine its perspective on the rapidly changing generative app ecosystem and the game theory around which types of companies are most likely to win at various layers of the generative AI stack.
The report defines autonomous intelligent agents as systems designed to perform specific tasks with little to no human intervention. For example, say someone is shopping for a dress to wear at a wedding in Tahoe in August. With that prompt, an agent would make suggestions and curate options based on the user's preferences (brand, style) and constraints (available inventory, price, size). Once the shopper makes a selection, the agent would complete the purchase and monitor shipping, rather than redirecting the shopper to a specific shopping site to complete the purchase themselves.
The report details challenges intelligent agents need to overcome: Model Memory (the model needs to learn from user questions, behavior, and preferences), Data (the application should connect to a data source that the application can access for tasks), Integrations (after the system receives information from the user, it needs to be able to integrate with external systems and execute actions), Compute (compute costs associated with foundation models can be high), Security and Authorization (models could be misused or harmful), UX & UI (the app should have a compelling user experience that is intuitive to the broader population), Distribution (the team should have a strategy for the application to reach target users and continually improve its dataset).
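As a rough illustration of how a couple of these pieces, model memory and integrations in particular, might fit together, here is a toy agent loop for the report's shopping scenario. The tool names, hard-coded routing, and canned inventory are hypothetical stand-ins; a real agent would let an LLM decide which integration to call and when.

```python
from typing import Callable, Dict, List

# Hypothetical external integrations the agent can call ("Integrations" above).
def search_inventory(query: str) -> List[str]:
    return ["Linen midi dress, size M, $120", "Silk wrap dress, size M, $180"]

def place_order(item: str) -> str:
    return f"Order placed for: {item}"

TOOLS: Dict[str, Callable] = {
    "search_inventory": search_inventory,
    "place_order": place_order,
}

class PersonalAgent:
    """Toy agent: remembers requests ("Model Memory") and calls tools ("Integrations")."""

    def __init__(self) -> None:
        self.memory: List[str] = []  # persists user preferences and past requests across turns

    def handle(self, request: str) -> str:
        self.memory.append(request)
        # A real agent would ask an LLM to pick the tool; here the routing is hard-coded.
        if "buy" in request.lower():
            options = TOOLS["search_inventory"](request)
            # Pretend the user confirmed the first option.
            return TOOLS["place_order"](options[0])
        return "Here are some options: " + "; ".join(TOOLS["search_inventory"](request))

agent = PersonalAgent()
print(agent.handle("Find a dress for a wedding in Tahoe in August"))
print(agent.handle("Buy the linen one"))
```

Even in this toy form, the remaining challenges the report lists (compute cost, security and authorization, UX, distribution) sit outside the loop itself, which is why the report treats them as separate hurdles.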
The report predicts that autonomous intelligent agents have the potential to bridge the gap between physical and digital consumer experiences and to enable new, more efficient online shopping: shopping carts that automatically populate with desired purchases could replace e-commerce catalogs, and endless scrolling through hotel, restaurant, and other online options could disappear. Ultimately, the report argues that connecting foundation models to systems of action, creating personalized experiences and closing the most frustrating gaps in today's digital consumer experience, will create immense value.
The Generative AI Stack: Making the Future Happen Faster
(June 1, 2023)
This report details various components of the generative AI tech stack. The report points out that there is just so much happening in AI right now! Applications that used to take years to develop are now being built over weekend hackathons. It's all a testament to the power of foundation models (we have not found the outer limits yet) and rapid innovation at the infrastructure layer to put that power in the hands of more developers.
The report breaks the generative AI tech stack into the following main parts: Application Frameworks (such as LangChain, Fixie, Microsoft's Semantic Kernel, and Google Cloud's Vertex AI platform), Models (foundation models, hosting, training), Data (data loaders, vector databases, context window), Evaluation Platform (prompt engineering, experimentation, observability), and Deployment (self-hosting, third-party services).
The report details the functions and market opportunities of each component. For application frameworks, the report notes that application frameworks have emerged to quickly absorb the storm of new innovations and rationalize them into a coherent programming model. They simplify the development process and allow developers to iterate quickly. Developers are using these frameworks to create applications that generate new content, create semantic systems that allow users to search content using natural language, and agents that perform tasks autonomously.
For the data part, the report emphasizes that LLMs are a powerful technology. But, they are limited to reasoning about the facts on which they were trained. That's constraining for developers looking to make decisions on the data that matters to them. Fortunately, there are mechanisms developers can use to connect and operationalize their data: data loaders, vector databases, context window (retrieval augmented generation is a popular technique for personalizing model outputs by incorporating data directly in the prompt).
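The retrieval augmented generation pattern the report mentions can be sketched in a few lines: score candidate documents against the user's question, then place the best match directly in the prompt. The word-overlap scoring below is a deliberately naive stand-in for embeddings plus a vector database, and the sample documents are invented for illustration.

```python
# Minimal RAG sketch: retrieve the most relevant document, then assemble the prompt.
DOCUMENTS = [
    "Q2 revenue grew 18% year over year, driven by enterprise subscriptions.",
    "The support team resolved 92% of tickets within 24 hours last quarter.",
    "Our refund policy allows returns within 30 days of purchase.",
]

def score(question: str, doc: str) -> int:
    """Word-overlap relevance score; a real system would compare embedding vectors."""
    return len(set(question.lower().split()) & set(doc.lower().split()))

def build_prompt(question: str) -> str:
    """Put the best-matching document into the context window alongside the question."""
    best_doc = max(DOCUMENTS, key=lambda d: score(question, d))
    return (
        "Answer the question using only the context below.\n"
        f"Context: {best_doc}\n"
        f"Question: {question}\n"
    )

# The assembled prompt would then be sent to a foundation model.
print(build_prompt("How fast did revenue grow in Q2?"))
```

Data loaders and vector databases industrialize exactly this step: getting the right slice of proprietary data into the context window at the moment the model needs it.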
The report also analyzes the importance of evaluation platforms: LLM developers face a tradeoff between model performance, inference cost, and latency. Developers can improve performance across all three vectors by iterating on prompts, fine-tuning the model, or switching between model providers. However, measuring performance is more complex due to the probabilistic nature of LLMs and the non-determinism of tasks. Fortunately, there are several evaluation tools to help developers determine the best prompts, provide offline and online experimentation tracking, and monitor model performance in production.
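A minimal sketch of that evaluation loop, assuming a stubbed model call and a toy pass/fail check (both invented here for illustration), might run each prompt variant over a small test set and report pass rate and average latency; real platforms add graded test sets, online experiment tracking, and cost accounting.

```python
import time
from typing import Callable, Dict, List

def call_model(prompt: str) -> str:
    """Stub for a foundation-model call; a real system would hit a provider API."""
    time.sleep(0.01)  # simulate inference latency
    return "SUMMARY: " + prompt[:40]

def passes_check(output: str) -> bool:
    """Toy quality check; real evaluations use graded test sets or model-based judges."""
    return output.startswith("SUMMARY:")

def evaluate(prompt_variants: Dict[str, str], test_inputs: List[str]) -> None:
    """Compare prompt templates on a small test set, tracking quality and latency."""
    for name, template in prompt_variants.items():
        latencies, passed = [], 0
        for text in test_inputs:
            start = time.perf_counter()
            output = call_model(template.format(text=text))
            latencies.append(time.perf_counter() - start)
            passed += passes_check(output)
        print(f"{name}: pass rate {passed}/{len(test_inputs)}, "
              f"avg latency {sum(latencies) / len(latencies):.3f}s")

evaluate(
    {"terse": "Summarize: {text}",
     "detailed": "Summarize the following in one sentence: {text}"},
    ["The meeting covered roadmap priorities.", "Revenue grew 18% in Q2."],
)
```

Swapping in a different model provider or a fine-tuned variant is just another row in this comparison, which is the tradeoff across performance, cost, and latency the report describes.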
Game On in the Generative AI Stack
(March 20, 2023)
This report analyzes the competitive landscape and game theory in the generative AI stack. The report deeply explores how companies at different layers build competitive advantages, and which types of companies are more likely to win at various layers. The report emphasizes the importance of technical infrastructure, data advantages, user experience, and distribution strategies in building enduring AI companies.
Our View on the Foundation Model Stack
(January 27, 2023)
This report details the foundation model tech stack. The report deeply explores the technical architecture, training methods, deployment strategies, and market landscape of foundation models. The report analyzes competition between open-source models and proprietary models, as well as technical advantages and business strategies of different model providers. The report also explores the impact of foundation models on the application layer, and how to build successful applications based on foundation models.