FutureX Perspective: Insights from NVIDIA GTC 2024


Last month, NVIDIA's 2024 GPU Technology Conference (GTC) captivated the global AI community, with sessions filled to capacity. The FutureX team had the privilege of joining tech leaders from around the world at this pivotal event, which highlighted NVIDIA's technological milestones and the future trajectory of AI.

NVIDIA's founder and CEO, Jensen Huang, delivered a keynote, "Don't Miss This Transformative Moment in AI," introducing the Blackwell computing platform and its new AI GPUs. The platform promises major performance gains for AI enterprises and was heralded by Huang as "the engine of a new industrial revolution." The rollout spans GPUs, AI superchips, servers, and cloud services. NVIDIA also launched NVIDIA Inference Microservices (NIM) to accelerate the development and deployment of enterprise-grade generative AI applications, a significant stride in its strategy of using software to drive hardware sales. The conference also showcased NVIDIA's latest work in humanoid robotics, automotive, pharmaceuticals, quantum computing, and climate prediction.

Our observations: what's next for LLMs

1. Reasoning: GTC participants broadly agree that reasoning is a key direction for the development of AI models. Models need to move beyond surface-level comprehension to perform deeper logical reasoning and analysis.

2. Workflow Integration: AI models need to integrate better into existing workflows. This hinges on how a model takes in input and adapts: models should understand and adjust to different work environments and needs.

3. Continuous Evolution: AI model development is a continuous process, with new experiments and directions constantly emerging. For instance, some teams are feeding non-verbal signals, such as facial expressions, into models to enhance their understanding and responsiveness.

Challenges and opportunities at the model level:

1. Scalability: Beyond scaling models up, optimizing their structure also deserves attention. Models need to become more efficient, smaller, and lower-latency so they can run on edge devices, such as Mixed Reality (MR) headsets.

2. Infrastructure: NVIDIA's emphasis on infrastructure is particularly evident in its support for large-model services. Infrastructure companies serving large AI models, such as CoreWeave and Lambda Labs, have shown strong momentum, while those focused on serving end customers face greater challenges. This suggests that in the enterprise application field, AI models and services have not yet matured and still require further optimization and improvement.

Tool Development: 

Although large models still have much room to iterate, the direction of tooling is less clear. RAG (Retrieval-Augmented Generation), for instance, could be displaced by newer techniques. Tools need to connect models to applications and adapt quickly to model changes while remaining cost-effective.
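For readers unfamiliar with the pattern, the core of RAG is simple: retrieve the documents most relevant to a query, then assemble them into a prompt for the model. The sketch below illustrates this with plain word-overlap scoring and invented example data; production systems would use dense embeddings, a vector store, and a real LLM call in place of these stand-ins.

```python
# Minimal sketch of the Retrieval-Augmented Generation (RAG) pattern:
# retrieve relevant documents, then build a grounded prompt from them.
# Scoring here is naive word overlap, purely for illustration.
from collections import Counter

def tokenize(text: str) -> Counter:
    return Counter(text.lower().split())

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by word overlap with the query and return the top k."""
    q = tokenize(query)
    scored = sorted(docs,
                    key=lambda d: sum((tokenize(d) & q).values()),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Concatenate the retrieved context with the user's question."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Blackwell is NVIDIA's latest GPU computing platform.",
    "RAG grounds model answers in retrieved documents.",
    "Edge devices require small, low-latency models.",
]
query = "What is the Blackwell platform?"
top = retrieve(query, docs)
prompt = build_prompt(query, top)  # would be sent to an LLM
```

The retrieval step is exactly the piece that newer techniques (long-context models, fine-tuning, agentic search) compete to replace, which is why the tooling layer around it remains unsettled.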

Industry-specific Applications:

We are bullish on companies using rich proprietary data to build nearly product-ready solutions, such as SaaS for managing enterprise knowledge bases, which is already finding market success. We are also tracking startups with innovative offerings, such as those specializing in coding platforms, which are poised for rapid growth. Here are a few interesting startups we've observed:

1. Finance: AI optimizes loan pre-approval processes, enhancing efficiency and generating stable revenues.

2. Video Editing and Marketing: AI-driven technologies that auto-generate and edit content based on narratives support marketing efforts and generate significant business value.

3. BI and AI Integration: By letting users query data directly in natural language, this technology simplifies analytics, giving both operational staff and executives easy access to information.
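The BI-plus-AI idea above amounts to translating a natural-language question into a database query and running it. The sketch below shows the shape of that pipeline over an in-memory SQLite table; a single hand-written template stands in for the LLM translation step, and all table and column names are invented for illustration.

```python
# Toy illustration of natural-language BI: map a question to SQL,
# execute it, and return rows a non-technical user can read.
# A real product would use an LLM for question_to_sql().
import sqlite3

def question_to_sql(question: str) -> str:
    """Stand-in for the LLM step: map a known question shape to SQL."""
    if "revenue by region" in question.lower():
        return ("SELECT region, SUM(amount) FROM sales "
                "GROUP BY region ORDER BY region")
    raise ValueError("question not understood")

# Hypothetical sales data in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EMEA", 120.0), ("APAC", 80.0), ("EMEA", 30.0)])

rows = conn.execute(question_to_sql("Show me revenue by region")).fetchall()
# rows now holds per-region revenue totals, ordered by region name.
```

The appeal for operational staff and executives is that the SQL layer disappears entirely: they ask the question, and the system handles translation, execution, and presentation.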