GenerativeAIExamples

Форк
0
1 строка · 577.1 Кб
1
[{"question": "Who is the CEO of Monarch Tractor?", "gt_answer": "The CEO of Monarch Tractor is Praveen Penmetsa.", "gt_context": "Cheers to AI: Monarch Tractor Launches First Commercially Available Electric, \u2018Driver Optional\u2019 Smart Tractor\n\nStartup will deliver the first six of its NVIDIA Jetson-driven Founder Series MK-V tractors to leading wine, spirits, and beer producer Constellation Brands\n\nAuthor: Scott Martin\n\nLivermore, Calif., renowned for research and vineyards, is plowing in a new distinction: the birthplace of the first commercially available smart tractor.\n\nLocal startup Monarch Tractor has announced the first of six Founder Series MK-V tractors are rolling off the production line at its headquarters. Constellation Brands, a leading wine and spirits producer and beer importer, will be the first customer given keys at a launch event today.\n\nThe debut caps a two-year development sprint since Monarch, founded in 2018, hatched plans to deliver its smart tractor, complete with the energy-efficient NVIDIA Jetson edge AI platform. The tractor combines electrification, automation, and data analysis to help farmers reduce their carbon footprint, improve field safety, streamline farming operations, and increase their bottom lines.\n\nThe MK-V tractor cuts energy costs and diesel emissions, while also helping reduce harmful herbicides, which are expensive and deplete the soil.\n\n\u201cWith precision ag, autonomy and AI, data will decrease the volume of chemicals used, which is good for the soil, good for the farmer from a profitability standpoint, and good for the consumer,\u201d said Praveen Penmetsa, CEO of Monarch Tractor.\n\nThe delivery of MK-V tractors to Constellation Brands will be followed with additional tractor shipments to family farms and large corporate customers, according to the company.\n\nMonarch is a member of the NVIDIA Inception program, which provides startups with technology support and AI platforms guidance.\n\nMonarch Tractor founders include veterans of Silicon Valley\u2019s EV scene who worked together at startup Zoox, now Amazon owned. Carlo Mondavi, from the Napa Valley Mondavi winery family, is a sustainability-focused vintner and chief farming officer. Mark Schwager, former Tesla Gigafactory chief, is president; Zachary Omohundro, a robotics Ph.D. from Carnegie Mellon, is CTO; Penmetsa is an autonomy and mobility engineer. \u201cThe marriage of NVIDIA accelerated computing with Jetson edge AI on our Monarch MK-V has helped our customers reduce the use of unneeded herbicides with our cutting-edge, zero-emission tractor \u2013 this revolutionary technology is helping our planet\u2019s soil, waterways and biodiversity,\u201d said Carlo Mondavi.\n\n\u201cThe marriage of NVIDIA accelerated computing with Jetson edge AI on our Monarch MK-V has helped our customers reduce the use of unneeded herbicides with our cutting-edge, zero-emission tractor \u2013 this revolutionary technology is helping our planet\u2019s soil, waterways and biodiversity,\u201d said Carlo Mondavi.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTIvMDEvbW9uZGF2aS1tb25hcmNoLXNtYXJ0LWVsZWN0cmljLWpldHNvbi10cmFjdG9yLw==.pdf"}, {"question": "What can the Monarch tractor do with crop data?", "gt_answer": "The tractor collects and analyzes crop data daily and can process data from current and next-generation implements equipped with sensors and imaging. This data can be used for real-time implement adjustments, long-term yield estimates, current growth stages, and other plant and crop health metrics.", "gt_context": "Penmetsa likens the revolutionary new tractor to paradigm shifts in PCs and smartphones, enablers of world-changing applications. Monarch\u2019s role, he said, is as the hub to enable smart implements \u2014 precision sprayers, harvesters and more \u2014 for computer vision applications to help automate farming.\n\nIn 2021, Monarch launched pilot test models for commercial use at Wente Vineyards, also based in Livermore. The trial at Wente compared its energy usage to that of a diesel tractor, noting Monarch saved more than $2,600 in annual expenses.\n\nMonarch has raised more than $110 million in funding. Strategic investors include Japanese auto parts maker Musashi Seimitsu Industry Co; CNH Industrial, an agricultural equipment maker; and VST Tillers Tractors, an India-based equipment maker and dealer of tractors and implements.\n\nIt recently signed a contract manufacturing agreement with Hon Hai Technology Group Foxconn to build the MK-V and its battery packs at the Mahoning Valley, Ohio, plant.\n\nAs a wave of AI sweeps farming , developers are working to support more sustainable farming practices.\n\nThe NVIDIA Jetson platform provides energy-efficient computing to the MK-V, which offers advances in battery performance.\n\nTapping into six NVIDIA Jetson Xavier NX SOMs (system on modules), Monarch\u2019s Founder Series MK-V tractors are essentially roving robots packing supercomputing.\n\nMonarch has harnessed Jetson to deliver tractors that can safely traverse rows within agriculture fields using only cameras. \u201cThis is important in certain agriculture environments because there may be no GPS signal,\u201d said Penmetsa. \u201cIt\u2019s also crucial for safety as the Monarch is intended for totally driverless operation.\u201d\n\nThe Founder Series MK-V runs two 3D cameras and six standard cameras. With the six Jetson edge AI modules on board, it can run models for multiple farming tasks when paired with different implements.\n\nSupporting more sustainable farming practices, computer vision applications are available to fine-tune with transfer learning for the Monarch platform to develop precision spraying and other options.\n\nMonarch offers a core of main applications to assist farms with AI, available in a software-as-a-service model on its platform.\n\nThe Founder Series MK-V has some basic functions on its platform as well, such as sending alerts when on a low charge or there\u2019s an unidentified object obstructing a path. It will also shut down from spraying if its camera-based vision platform identifies a human.\n\nThe tractor collects and analyzes crop data daily and can process data from current and next-generation implements equipped with sensors and imaging. This data can be used for real-time implement adjustments, long-term yield estimates, current growth stages and other plant and crop health metrics.\n\nWider availability of the tractor begins a new chapter in improved farming practices.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTIvMDEvbW9uZGF2aS1tb25hcmNoLXNtYXJ0LWVsZWN0cmljLWpldHNvbi10cmFjdG9yLw==.pdf"}, {"question": "How has the marriage of NVIDIA accelerated computing with Jetson edge AI helped customers?", "gt_answer": "The marriage of NVIDIA accelerated computing with Jetson edge AI has helped customers reduce the use of unneeded herbicides with the cutting-edge, zero-emission tractor.", "gt_context": "Wider availability of the tractor begins a new chapter in improved farming practices.\n\n\u201cThe marriage of NVIDIA accelerated computing with Jetson edge AI on our Monarch MK-V has helped our customers reduce the use of unneeded herbicides with our cutting-edge, zero-emission tractor \u2013 this revolutionary technology is helping our planet\u2019s soil, waterways and biodiversity,\u201d said Mondavi.\n\nLearn more about NVIDIA Isaac platform for robotics and apply to join NVIDIA Inception .\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/12/01/mondavi-monarch-smart-electric-jetson-tractor/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTIvMDEvbW9uZGF2aS1tb25hcmNoLXNtYXJ0LWVsZWN0cmljLWpldHNvbi10cmFjdG9yLw==.pdf"}, {"question": "What is Project Helix?", "gt_answer": "Project Helix is a joint initiative by Dell Technologies and NVIDIA to make it easier for businesses to build and use generative AI models on premises.", "gt_context": "Dell Technologies and NVIDIA Introduce Project Helix for Secure, On-Premises Generative AI\n\nProject Helix makes it easy for enterprises to build and deploy trustworthy generative AI\n\nDell and NVIDIA infrastructure and software include built-in data security for on-premises generative AI applications\n\nDell Technologies World\u2014Dell Technologies (NYSE: DELL) and NVIDIA (NASDAQ: NVDA) announce a joint initiative to make it easier for businesses to build and use generative AI models on premises to quickly and securely deliver better customer service, market intelligence, enterprise search, and a range of other capabilities.\n\nProject Helix will deliver a series of full-stack solutions with technical expertise and pre-built tools based on Dell and NVIDIA infrastructure and software. It includes a complete blueprint to help enterprises use their proprietary data and more easily deploy generative AI responsibly and accurately.\n\n\u201cProject Helix gives enterprises purpose-built AI models to more quickly and securely gain value from the immense amounts of data underused today,\u201d said Jeff Clarke, vice chairman and co-chief operating officer, Dell Technologies. \u201cWith highly scalable and efficient infrastructure, enterprises can create a new wave of generative AI solutions that can reinvent their industries.\u201d\n\n\u201cWe are at a historic moment, when incredible advances in generative AI are intersecting with enterprise demand to do more with less,\u201d said Jensen Huang, founder and CEO, NVIDIA. \u201cWith Dell Technologies, we\u2019ve designed extremely scalable, highly efficient infrastructure that enables enterprises to transform their business by securely using their own data to build and operate generative AI applications.\u201d\n\nProject Helix simplifies enterprise generative AI deployments with a tested combination of optimized hardware and software, all available from Dell. This delivers the power to convert enterprise data into smarter, higher value outcomes, while maintaining data privacy. These solutions will help companies quickly deploy customized AI applications that drive trusted decisions from their own data to grow and scale their businesses.\n\nBlueprint for On-Premises Generative AI\n\nProject Helix will support the complete generative AI lifecycle \u2013 from infrastructure provisioning, modeling, training, fine- tuning, application development and deployment, to deploying inference and streamlining results. The validated designs help enterprises quickly build on-premises generative AI infrastructure at scale.\n\nDell PowerEdge servers, such as the PowerEdge XE9680 and PowerEdge R760xa, are optimized to deliver performance for generative AI training and AI inferencing. The combination of Dell servers with NVIDIA\u00ae H100 Tensor Core GPUs and NVIDIA Networking form the infrastructure backbone for these workloads. Customers can pair this infrastructure with resilient and scalable unstructured data storage, including Dell PowerScale and Dell ECS Enterprise Object Storage.", "document": "RGVsbCBUZWNoIDUvMjMvMjMucGRm.pdf"}, {"question": "What software does the Project Helix include?", "gt_answer": "The Project Helix includes Dell server and storage software, Dell CloudIQ software for observability, and NVIDIA AI Enterprise software.", "gt_context": "With all Dell Validated Designs, customers can use the enterprise features of Dell server and storage software, with observability through Dell CloudIQ software. Project Helix also includes NVIDIA AI Enterprise software to provide tools for customers as they move through the AI lifecycle. NVIDIA AI Enterprise includes more than 100 frameworks, pretrained models and development tools such as the NVIDIA NeMo\u2122 large language model framework and NeMo Guardrails software for building topical, safe and secure generative AI chatbots.\n\nProject Helix includes security and privacy built into foundational components, such as Secured Component Verification. Protecting data on-premises reduces inherent risk and helps companies meet regulatory requirements.\n\n\u201cCompanies are eager to explore the opportunities that generative AI tools enable for their organizations, but many aren\u2019t sure how to get started,\u201d said Bob O\u2019Donnell, president and chief analyst, TECHnalysis Research. \u201cBy putting together a complete hardware and software solution from trusted brands, Dell Technologies and NVIDIA are offering enterprises a head start to building and refining AI-powered models that can leverage their own company\u2019s unique assets and create powerful, customized tools.\u201d\n\nAvailability\n\nDell Validated Designs based on the Project Helix initiative will be available through traditional channels and APEX flexible consumption options, beginning in July 2023.\n\nAdditional Resources\n\nLearn more about AI at Dell Technologies.\n\nAbout Dell Technologies\u202f Dell Technologies (NYSE: DELL) helps organizations and individuals build their digital future and transform how they work, live and play. The company provides customers with the industry\u2019s broadest and most innovative technology and services portfolio for the data era.\n\nAbout NVIDIA Since its founding in 1993, NVIDIA (NASDAQ: NVDA) has been a pioneer in accelerated computing. The company\u2019s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, ignited the era of modern AI and is fueling the creation of the industrial metaverse. NVIDIA is now a full-stack computing company with data- center-scale offerings that are reshaping industry. More information at https://nvidianews.nvidia.com/.", "document": "RGVsbCBUZWNoIDUvMjMvMjMucGRm.pdf"}, {"question": "What are some of the factors that could cause actual results to differ materially?", "gt_answer": "Some factors that could cause actual results to differ materially include global economic conditions, reliance on third parties, technological development and competition, changes in consumer preferences or demands, and unexpected loss of performance of products or technologies when integrated into systems.", "gt_context": "Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, performance, features and availability of our products, collaborations, services, and technologies, including Project Helix, H100 Tensor Core GPUs, NVIDIA Networking, NVIDIA AI Enterprise, NeMo, and NeMo Guardrails; the benefits, impact, performance, features, and availability of NVIDIA\u2019s joint initiative with Dell Technologies; advances in generative AI intersecting with enterprise demand to do more with less are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners' products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company's website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.\n\n\u00a9 2023 Dell Inc. or its subsidiaries. All Rights Reserved. Dell Technologies and Dell are trademarks of Dell Inc. or its subsidiaries. NVIDIA, the NVIDIA logo and NeMo are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries.\n\nAllie Courtney NVIDIA Corporation +1-408-706-8995 acourtney@nvidia.com Dell Technologies Media Relations Media.Relations@Dell.com", "document": "RGVsbCBUZWNoIDUvMjMvMjMucGRm.pdf"}, {"question": "Who was among the first to show the power of deep neural networks trained on massive datasets?", "gt_answer": "Ilya Sutskever", "gt_context": "AI Opener: OpenAI\u2019s Sutskever in Conversation With Jensen Huang In a fireside chat at GTC, NVIDIA\u2019s founder and CEO and OpenAI co-founder Ilya Sutskever discussed GPT-4, ChatGPT, deep learning\u2019s future and how it all began.\n\nAuthor: Rick Merritt\n\nLike old friends catching up over coffee, two industry icons reflected on how modern AI got its start, where it\u2019s at today and where it needs to go next.\n\nJensen Huang, founder and CEO of NVIDIA, interviewed AI pioneer Ilya Sutskever in a fireside chat at GTC . The talk was recorded a day after the launch of GPT-4, the most powerful AI model to date from OpenAI, the research company Sutskever co-founded.\n\nThey talked at length about GPT-4 and its forerunners, including ChatGPT. That generative AI model, though only a few months old, is already the most popular computer application in history.\n\nTheir conversation touched on the capabilities, limits and inner workings of the deep neural networks that are capturing the imaginations of hundreds of millions of users.\n\nCompared to ChatGPT, GPT-4 marks a \u201cpretty substantial improvement across many dimensions,\u201d said Sutskever, noting the new model can read images as well as text.\n\n\u201cIn some future version, [users] might get a diagram back\u201d in response to a query, he said.\n\n\u201cThere\u2019s a misunderstanding that ChatGPT is one large language model, but there\u2019s a system around it,\u201d said Huang.\n\nIn a sign of that complexity, Sutskever said OpenAI uses two levels of training.\n\nThe first stage focuses on accurately predicting the next word in a series. Here, \u201cwhat the neural net learns is some representation of the process that produced the text, and that\u2019s a projection of the world,\u201d he said.\n\nThe second \u201cis where we communicate to the neural network what we want, including guardrails \u2026 so it becomes more reliable and precise,\u201d he added.\n\nWhile he\u2019s at the swirling center of modern AI today, Sutskever was also present at its creation.\n\nIn 2012, he was among the first to show the power of deep neural networks trained on massive datasets. In an academic contest, the AlexNet model he demonstrated with AI pioneers Geoff Hinton and Alex Krizhevsky recognized images faster than a human could.\n\nHuang referred to their work as the Big Bang of AI .\n\nThe results \u201cbroke the record by such a large margin, it was clear there was a discontinuity here,\u201d Huang said.\n\nPart of that breakthrough came from the parallel processing the team applied to its model with GPUs.\n\n\u201cThe ImageNet dataset and a convolutional neural network were a great fit for GPUs that made it unbelievably fast to train something unprecedented,\u201d Sutskever said.\n\nThat early work ran on a few GeForce GTX 580 GPUs in a University of Toronto lab. Today, tens of thousands of the latest NVIDIA A100 and H100 Tensor Core GPUs in the Microsoft Azure cloud service handle training and inference on models like ChatGPT.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDMvMjIvc3V0c2tldmVyLW9wZW5haS1ndGMv.pdf"}, {"question": "What was Sutskever's belief about scaling?", "gt_answer": "Sutskever had a strong belief that bigger is better and a goal at OpenAI was to scale.", "gt_context": "\u201cIn the 10 years we\u2019ve known each other, the models you\u2019ve trained [have grown by] about a million times,\u201d Huang said. \u201cNo one in computer science would have believed the computation done in that time would be a million times larger.\u201d\n\n\u201cI had a very strong belief that bigger is better, and a goal at OpenAI was to scale,\u201d said Sutskever.\n\nAlong the way, the two shared a laugh.\n\n\u201cHumans hear a billion words in a lifetime,\u201d Sutskever said.\n\n\u201cDoes that include the words in my own head,\u201d Huang shot back.\n\n\u201cMake it 2 billion,\u201d Sutskever deadpanned.\n\nThey ended their nearly hour-long talk discussing the outlook for AI.\n\nAsked if GPT-4 has reasoning capabilities, Sutskever suggested the term is hard to define and the capability may still be on the horizon.\n\n\u201cWe\u2019ll keep seeing systems that astound us with what they can do,\u201d he said. \u201cThe frontier is in reliability, getting to a point where we can trust what it can do, and that if it doesn\u2019t know something, it says so,\u201d he added.\n\n\u201cYour body of work is incredible \u2026 truly remarkable,\u201d said Huang in closing the session. \u201cThis has been one of the best beyond Ph.D. descriptions of the state of the art of large language models,\u201d he said.\n\nTo get all the news from GTC, watch the keynote below.\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/03/22/sutskever-openai-gtc/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDMvMjIvc3V0c2tldmVyLW9wZW5haS1ndGMv.pdf"}, {"question": "What tools do the designers at Trek Bicycle use for their computer-aided design workflows?", "gt_answer": "The designers use graphics-intensive applications tools such as Adobe Substance 3D, Cinema 4D, KeyShot, Redshift, and SOLIDWORKS.", "gt_context": "Design Speed Takes the Lead: Trek Bicycle Competes in Tour de France With Bikes Developed Using NVIDIA GPUs\n\nTeam uses RTX technology to accelerate product design, iterate more quickly and run realistic computational fluid dynamics simulations to build world-class bicycles.\n\nAuthor: Nicole Castro\n\nNVIDIA RTX is spinning new cycles for designs. Trek Bicycle is using GPUs to bring design concepts to life.\n\nThe Wisconsin-based company, one of the largest bicycle manufacturers in the world, aims to create bikes with the highest-quality craftsmanship. With its new partner Lidl, an international retailer chain, Trek Bicycle also owns a cycling team, now called Lidl-Trek . The team is competing in the annual Tour de France stage race on Trek Bicycle\u2019s flagship lineup, which includes the Emonda , Madone and Speed Concept . Many of the team\u2019s accessories and equipment, such as the wheels and road race helmets, were also designed at Trek.\n\nBicycle design involves complex physics \u2014 and a key challenge is balancing aerodynamic efficiency with comfort and ride quality. To address this, the team at Trek is using NVIDIA A100 Tensor Core GPUs to run high-fidelity computational fluid dynamics (CFD) simulations, setting new benchmarks for aerodynamics in a bicycle that\u2019s also comfortable to ride and handles smoothly.\n\nThe designers and engineers are further enhancing their workflows using NVIDIA RTX technology in Dell Precision workstations, including the NVIDIA RTX A5500 GPU , as well as a Dell Precision 7920 running dual RTX A6000 GPUs.\n\nTo kick off the product design process, the team starts with user research to generate early design concepts and develop a range of ideas. Then, they build prototypes and iterate the design as needed.\n\nTo improve performance, the bikes need to feel a certain way, whether riders are taking it on the road or the trail. So Trek spends a lot of time with athletes to figure out where to make critical changes, including tweaks to geometry and the flexibility of the frame and taking the edge off of bumps.\n\nThe designers use graphics-intensive applications tools for their computer-aided design workflows, including Adobe Substance 3D, Cinema 4D, KeyShot, Redshift and SOLIDWORKS. For CFD simulations, the Trek Performance Research team uses Simcenter STAR-CCM+ from Siemens Digital Industries Software to take advantage of the GPU processing capabilities .\n\nNVIDIA RTX GPUs provided Trek with a giant leap forward for design and engineering. The visualization team can easily tap into RTX technology to iterate quicker and show more options in designs. They can also use Cinema 4D and Redshift with RTX to produce high-quality renderings and even to visualize different designs in near real time.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDcvMTAvdHJlay1iaWN5Y2xlLXRvdXItZGUtZnJhbmNlLWdwdXMv.pdf"}, {"question": "What tool does Trek Bicycle use to optimize the performance of their bikes?", "gt_answer": "Trek Bicycle uses Simcenter STAR-CCM+ to optimize the performance of each bike.", "gt_context": "Michael Hammond, the lead for digital visual communications at Trek Bicycle, explains the importance of having time for iterations. \u201cThe faster we can render an image or animation, the faster we can improve it,\u201d he said. \u201cBut at the same time, we don\u2019t want to lose details or spend time recreating models.\u201d\n\nWith the help of the RTX A5500, Trek\u2019s digital visual team can push past creative limits and reach the final design much faster. \u201cOn average, the RTX GPU performs 12x faster than our network rendering, which is on CPU cores,\u201d said Hammond. \u201cFor a render that takes about two hours to complete on our\n\nnetwork, it only takes around 10-12 minutes on the RTX A5500 \u2014 that means I can do 12x the iterations, which leads to better quality rendering and animation in less time.\u201d\n\nOver the past decade, adoption of CFD has grown as a critical tool for engineers and equipment designers because it allows them to gain better insights into the behavior of their designs. But CFD is more than an analysis tool \u2014 it\u2019s used to make improvements without having to resort to time-consuming and expensive physical testing for every design. This is why Trek has integrated CFD into its product development workflows.\n\nThe aerodynamics team at Trek relies on Simcenter STAR-CCM+ to optimize the performance of each bike. To provide a comfortable ride and smooth handling while achieving the best aerodynamic performance, the Trek engineers designed the latest generation Madone to use IsoFlow , a unique feature designed to increase rider comfort while reducing drag.\n\nThe Simcenter STAR-CCM+ simulations benefit from the speed of accelerated GPU computing , and it enabled the engineers to cut down simulation runtimes by 85 days, as they could run CFD simulations 4-5x faster on NVIDIA A100 GPUs compared to their 128-core CPU-based HPC server.\n\nThe team can also analyze more complex physics in CFD to better understand how the air is moving in real-world unsteady conditions.\n\n\u201cNow that we can run higher fidelity and more accurate simulations and still meet deadlines, we are able to reduce wind tunnel testing time for significant cost savings,\u201d said John Davis, the aerodynamics lead at Trek Bicycle. \u201cWithin the first two months of running CFD on our GPUs, we were able to cancel a planned wind tunnel test due to the increased confidence we had in simulation results.\u201d\n\nLearn more about Trek Bicycle and GPU-accelerated Simcenter STAR-CCM+ .\n\nAnd join us at SIGGRAPH , which runs from Aug. 6-10, to see the latest technologies shaping the future of design and simulation.\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/07/10/trek-bicycle-tour-de-france-gpus/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDcvMTAvdHJlay1iaWN5Y2xlLXRvdXItZGUtZnJhbmNlLWdwdXMv.pdf"}, {"question": "What are the key barriers to widespread adoption of AI models?", "gt_answer": "MosaicML has identified two key barriers to widespread adoption: the difficulty of coordinating a large number of GPUs to train a model and the costs associated with this process.", "gt_context": "MosaicML Helps AI Users Boost Accuracy, Cut Costs and Save Time\n\nAuthor: Brian Caulfield\n\nStartup MosaicML is on a mission to help the AI community improve prediction accuracy, decrease costs and save time by providing tools for easy training and deployment of large AI models.\n\nIn this episode of NVIDIA\u2019s AI Podcast , host Noah Kravitz speaks with MosaicML CEO and co-founder Naveen Rao about how the company aims to democratize access to large language models .\n\nMosaicML, a member of NVIDIA\u2019s Inception program , has identified two key barriers to widespread adoption: the difficulty of coordinating a large number of GPUs to train a model and the costs associated with this process.\n\nMosaicML was in the news earlier this month when Databricks announced an agreement to acquire MosaicML for $1.3 billion.\n\nMaking training of models accessible is key for many companies that need control over model behavior, respect data privacy and iterate fast to develop new products based on AI.\n\nJules Anh Tuan Nguyen Explains How AI Lets Amputee Control Prosthetic Hand, Video Games\n\nA postdoctoral researcher at the University of Minnesota discusses his efforts to allow amputees to control their prosthetic limb \u2014 right down to the finger motions \u2014 with their minds.\n\nOverjet\u2019s Ai Wardah Inam on Bringing AI to Dentistry\n\nOverjet, a member of NVIDIA Inception, is moving fast to bring AI to dentists\u2019 offices. Dr. Wardah Inam, CEO of the company, discusses using AI to improve patient care.\n\nImmunai CTO and Co-Founder Luis Voloch on Using Deep Learning to Develop New Drugs\n\nLuis Voloch, co-founder and chief technology officer of Immunai, talks about tackling the challenges of the immune system with a machine learning and data science mindset.\n\nThe AI Podcast is now available through Amazon Music . Additionally, you can also get the AI Podcast through iTunes , Google Podcasts , Google Play , Castbox , DoggCatcher, Overcast , PlayerFM , Pocket Casts, Podbay , PodBean , PodCruncher, PodKicker, Soundcloud , Spotify , Stitcher and TuneIn .\n\nMake the AI Podcast better. Have a few minutes to spare? Fill out this listener survey .\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/07/12/mosaicml/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDcvMTIvbW9zYWljbWwv.pdf"}, {"question": "What topics will be covered in the auto sessions at GTC?", "gt_answer": "The auto sessions at GTC will cover topics such as using AI in teaching professional racing, improving camera perception with AI, rethinking electronic architecture in EVs, natural language processing in automotive research, and the impact of AI and system architectures on autonomous vehicle development.", "gt_context": "Transportation Generation: See How AI and the Metaverse Are Shaping the Automotive Industry at GTC Innovations in generative AI, simulation, accelerated computing and more are advancing safer, more efficient mobility.\n\nAuthor: Danny Shapiro\n\nNovel AI technologies are generating images, stories and, now, new ways to imagine the automotive future.\n\nAt NVIDIA GTC , a global conference for the era of AI and the metaverse running online March 20-23, industry luminaries working on these breakthroughs will come together and share their visions to transform transportation.\n\nThis year\u2019s slate of in-depth sessions includes leaders from automotive, robotics, healthcare and other industries, as well as trailblazing AI researchers.\n\nHeadlining GTC is NVIDIA founder and CEO Jensen Huang, who will present the latest in AI and NVIDIA Omniverse , a platform for creating and operating metaverse applications, in a keynote address on Tuesday, March 21, at 8 a.m. PT.\n\nConference attendees will have plenty of opportunities to network and learn from NVIDIA and industry experts about the technologies powering the next generation of automotive.\n\nHere\u2019s what to expect from auto sessions at GTC :\n\nThe entire automotive industry is being transformed by AI and metaverse technologies, whether they\u2019re used for design and engineering, manufacturing, autonomous driving or the customer experience.\n\nSpeakers from these areas will share how they\u2019re using the latest innovations to supercharge development: Sacha Vra\u017ein, director of autonomous driving R&D; at Rimac Technology, discusses how the supercar maker is using AI to teach any driver how to race like a professional on the track.\n\nToru Saito, deputy chief of Subaru Lab at Subaru Corporation, walks through how the automaker is improving camera perception with AI, using large-dataset training on GPUs and in the cloud.\n\nTom Xie, vice president at ZEEKR, explains how the electric vehicle company is rethinking the electronic architecture in EVs to develop a software-defined lineup that is continuously upgradeable.\n\nLiz Metcalfe-Williams, senior data scientist, and Otto Fitzke, machine learning engineer at Jaguar Land Rover, cover key learnings from the premium automaker\u2019s research into natural language processing to improve knowledge and systems, and to accelerate the development of high-quality, validated, cutting-edge products.\n\nMarco Pavone, director of autonomous vehicle research; Sanja Fidler, vice president of AI research; and Sarah Tariq, vice president of autonomous vehicle software at NVIDIA, show how generative AI and novel, highly integrated system architectures will radically change how AVs are designed and developed.\n\nIn addition to sessions from industry leaders, GTC attendees can access talks on the latest NVIDIA DRIVE technologies led by in-house experts.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDIvMTYvYWktbWV0YXZlcnNlLXNoYXBpbmctYXV0b21vdGl2ZS1pbmR1c3RyeS1ndGMv.pdf"}, {"question": "What topics are covered in NVIDIA DRIVE Developer Days?", "gt_answer": "Topics include high-definition mapping, AV simulation, synthetic data generation for testing and validation, enhancing AV safety with in-system testing, and multi-task models for AV perception.", "gt_context": "NVIDIA DRIVE Developer Days consist of a series of deep-dive sessions on building safe and robust autonomous vehicles. Led by the NVIDIA engineering team, these talks will highlight the newest DRIVE features and how to apply them.\n\nTopics include high-definition mapping, AV simulation, synthetic data generation for testing and validation, enhancing AV safety with in-system testing, and multi-task models for AV perception.\n\nAccess these virtual sessions and more by registering free to attend and see the technologies generating the intelligent future of transportation.\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/02/16/ai-metaverse-shaping-automotive-industry-gtc/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDIvMTYvYWktbWV0YXZlcnNlLXNoYXBpbmctYXV0b21vdGl2ZS1pbmR1c3RyeS1ndGMv.pdf"}, {"question": "Who recently joined an Omniverse livestream to demonstrate their workflow using Unreal Engine and Omniverse?", "gt_answer": "Abdelrazik Maghata, aka MR GFX", "gt_context": "Epic Benefits: Omniverse Connector for Unreal Engine Saves Content Creators Time and Effort Updates bring improved compatibility and workflow enhancements to USD, enabling faster, more efficient workflow automation.\n\nAuthor: Pooya Ghobadpour\n\nContent creators using Epic Games\u2019 open, advanced real-time 3D creation tool, Unreal Engine, are now equipped with more features to bring their work to life with NVIDIA Omniverse , a platform for creating and operating metaverse applications.\n\nThe Omniverse Connector for Unreal Engine \u2019s 201.0 update brings significant enhancements to creative workflows using both open platforms.\n\nThe Unreal Engine Omniverse Connector 201.0 release delivers improvements in import, export and live workflows, as well as updated software development kits.\n\nNew features include:\n\nAlignment with Epic\u2019s USD libraries and USDImporter plug-in : Improved compatibility between Omniverse and Epic\u2019s Universal Scene Description (USD) libraries and USDImporter plug-in make it easier to transfer assets between the two platforms.\n\nPython 3.9 scripts with Omniverse URLs : Unreal Engine developers and technical artists can access Epic\u2019s built-in Python libraries by running Python 3.9 scripts with Omniverse URLs, which link to files on Omniverse Nucleus servers, helping automate tasks.\n\nSkeletal mesh blendshape import to morph targets : The Unreal Engine Connector 201.0 now allows users to import skeletal mesh blendshapes into morph targets, or stored geometry shapes that can be used for animation. This eases development and material work on characters that use NVIDIA Material Definition Language ( MDL ), reducing the time it takes to share character assets with other artists.\n\nUsdLuxLight schema compatibility : Improved compatibility of Unreal Engine with the UsdLuxLight schema \u2014 the blueprint used to define data that describes lighting in USD \u2014 makes it easier for content creators to work with lighting in Omniverse.\n\nArtists and game content creators are seeing notable improvements to their workflows thanks to this connector update.\n\nDeveloper and creator Abdelrazik Maghata, aka MR GFX on YouTube, recently joined an Omniverse livestream to demonstrate his workflow using Unreal Engine and Omniverse. Maghata explained how to animate a character in real time by connecting the Omniverse Audio2Face generative AI-powered application to Epic\u2019s MetaHuman framework in Unreal Engine.\n\nMaghata, who\u2019s been a content creator on YouTube for 15 years, uses his platform to teach others about the benefits of Unreal Engine for their 3D workflows. He\u2019s recently added Omniverse into his repertoire to build connections between his favorite content creation tools.\n\n\u201cOmniverse will transform the world of 3D,\u201d he said.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDQvMjEvZXBpYy1iZW5lZml0cy1vbW5pdmVyc2UtY29ubmVjdG9yLXVucmVhbC1lbmdpbmUv.pdf"}, {"question": "What has the Unreal Engine Connector done for Jae Solina's creative process?", "gt_answer": "The Unreal Engine Connector has greatly improved Jae Solina's workflow efficiency and increased productivity.", "gt_context": "\u201cOmniverse will transform the world of 3D,\u201d he said.\n\nOmniverse ambassador and short-film phenom Jae Solina often uses the Unreal Engine Connector in his creative process, as well. The connector has greatly improved his workflow efficiency and increased productivity by providing interoperability between his favorite tools, Solina said.\n\nGetting connected is simple. Learn how to accelerate creative workflows with the Unreal Engine Omniverse Connector by watching this video:\n\nAt the recent NVIDIA GTC conference, the Omniverse team hosted many sessions spotlighting how creators can enhance their workflows with generative AI, 3D SimReady assets and more. Watch for free on demand .\n\nPlus, join the latest Omniverse community challenge, running through the end of the month. Use the Unreal Engine Omniverse Connector and share your creation \u2014 whether it\u2019s fan art, a video-game character or even an original game \u2014 on social media using the hashtag #GameArtChallenge for a chance to be featured on channels for NVIDIA Omniverse ( Twitter , LinkedIn , Instagram ) and NVIDIA Studio ( Twitter , Facebook , Instagram ). Are you up for a challenge? n\n\nFrom now until April 30, share your video-game inspired work in our #GameArtChallenge . n @rafianimates is making his very own #AR game and shared some WIPs of the characters #MadeInOmniverse with @VoxEdit & #MagicaVoxel . pic.twitter.com/pppDNarNk4\n\n\u2014 NVIDIA Omniverse (@nvidiaomniverse) March 9, 2023\n\nGet started with NVIDIA Omniverse by downloading the standard license free , or learn how Omniverse Enterprise can connect team s. Developers can get started with these Omniverse resources .\n\nTo stay up to date on the platform, subscribe to the newsletter and follow NVIDIA Omniverse on Instagram , Medium and Twitter . Check out the Omniverse forums , Discord server , Twitch and YouTube channels.\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/04/21/epic-benefits-omniverse-connector-unreal-engine/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDQvMjEvZXBpYy1iZW5lZml0cy1vbW5pdmVyc2UtY29ubmVjdG9yLXVucmVhbC1lbmdpbmUv.pdf"}, {"question": "How does cloud streaming address the power limitations of all-in-one headsets for enterprise workflows?", "gt_answer": "Cloud streaming allows professionals to run high-quality XR workflows from powerful computational resources at the edge, and stream these experiences to any location. This means users can develop software in the cloud or use all-in-one headsets to access large enterprise content remotely.", "gt_context": "Cloud Streams and Photorealistic Scenes: Four Technologies That Elevate XR Experiences The scope of immersive realities is expanding with advanced streaming, photorealistic rendering, AI and collaboration technologies.\n\nAuthor: David Weinstein\n\nMany organizations are using extended reality (XR) to deliver realistic immersive environments \u2014 whether enabling users to collaborate on designs for electric race cars , or helping audiences interact with nature through a digital world .\n\nNext-generation immersive technologies are becoming more accessible, and the latest breakthroughs in graphics and AI are expanding the capabilities of XR. These four technologies are setting new standards in the XR ecosystem: cloud streaming, collaboration tools, photorealistic rendering and AI.\n\nAll-in-one headsets are becoming increasingly popular because they enable an untethered virtual reality experience. But they typically can\u2019t provide the power that\u2019s needed for enterprise workflows, which often include complex simulations and detailed visualizations that contain millions of polygons.\n\nCloud streaming addresses this challenge by enabling professionals to run high-quality XR workflows from powerful computational resources at the edge, and stream these experiences to any location.\n\nWith cloud streaming, handheld devices or low-powered headsets can securely tap into heavy, complex workloads. This means users can develop software in the cloud or use all-in-one headsets to access large enterprise content \u2014 all remotely.\n\nNVIDIA CloudXR and Lenovo ThinkReality VRX are two solutions making cloud streaming more attainable.\n\nCloudXR provides professionals with enhanced flexibility and portability, as it allows teams to stream the most powerful VR and AR applications from the cloud or data center to virtually any device. This means professionals can explore graphics-intensive, immersive environments on a headset, tablet or smartphone.\n\nThe Lenovo ThinkReality VRX was built with enterprise use cases in mind, using CloudXR to deliver high-quality, immersive, GPU-powered XR experiences.\n\nReal-time collaboration is an important part of design and development workflows. As more teams work from different locations, XR gives users the opportunity to collaborate virtually with their coworkers, minimizing turnaround times and making review cycles faster.\n\nAdvanced tools like integrated bidirectional audio can bring real-time collaboration to teams, allowing users to communicate throughout their immersive experiences. Being able to conduct real-time design reviews together opens endless possibilities for collaboration across different XR workflows.\n\nProject teams can also work together on the same virtual models through CloudXR and Autodesk VRED on Amazon Web Services . This lets anyone deploy CloudXR and Autodesk VRED to enter a photorealistic, immersive environment and seamlessly interact with large 3D models and scenes.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTEvMTcveHItdGVjaG5vbG9naWVzLw==.pdf"}, {"question": "What role does photorealism play in creating graphics for virtual worlds and the metaverse?", "gt_answer": "Photorealism is critical in industries such as architecture, manufacturing, and automotive design. It is important for rendered graphics to be as close to reality as possible to create immersive and realistic experiences in virtual worlds and the metaverse.", "gt_context": "Enterprise workflows often require ultra-high-fidelity graphics. This is why photorealism is critical for industries like architecture, manufacturing and automotive design \u2014 rendered graphics require immense accuracy and must be as close to reality as possible.\n\nAnd with the growth of virtual worlds, digital twins and the metaverse , photorealism plays an important role in creating graphics to bring these simulated worlds to life. To experience the metaverse at full potential, users need to have the best graphics and the most powerful workstations.\n\nNVIDIA RTX technology helps professionals develop and deliver the highest-fidelity visualizations for these immersive and virtual experiences. The latest NVIDIA RTX professional GPUs offer a combination of speed, performance and massive memory that allows teams to easily handle large, complex models in photorealistic environments.\n\nTeams across the world can easily work together within real-time, photorealistic, 3D simulated worlds using NVIDIA Omniverse , a platform for building and connecting custom 3D pipelines. Within the platform, NVIDIA Omniverse Create XR is available for users to navigate and interact with their scenes through their immersive VR and AR devices.\n\nWith AI, the future of XR will be accelerated even further. The integration of AI and XR will help solutions scale past current limits and allow users to interact in immersive environments just as they would in the real world. Accurately representing immersive environments and making interactions as natural as possible will improve users\u2019 comfort level and productivity.\n\nNVIDIA\u2019s Project Aurora is an example of a purpose-built platform where AI-driven virtual assistance will be integrated in XR environments. Project Aurora is a hardware and software platform that simplifies the deployment of enterprise XR solutions onto on-premises networks.\n\nAnother example of AI enhancing XR is NVIDIA\u2019s Project Mellon , where conversational speech can dictate commands in an immersive experience.\n\nAdditionally, AI avatars will pave the way for creating engaging, interactive XR experiences. AI will enable 3D characters to see, hear, understand and communicate with people.\n\nCreators and developers can bring these intelligent avatars to life with NVIDIA Omniverse Avatar Cloud Engine (ACE) , a suite of cloud-native AI microservices that make it easier to build and deploy virtual assistants and digital humans. Omniverse ACE delivers all the AI building blocks necessary to create, customize and deploy interactive avatars.\n\nWith the power of AI integrated in XR, professionals and developers can create immersive experiences that are more realistic and intelligent than ever.\n\nLearn more about NVIDIA XR technologies , and catch up on the latest GTC session to learn more about Project Aurora .\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/11/17/xr-technologies/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTEvMTcveHItdGVjaG5vbG9naWVzLw==.pdf"}, {"question": "What is the collaboration between Tata Group and NVIDIA?", "gt_answer": "Tata Group and NVIDIA are collaborating to deliver AI computing infrastructure and platforms for developing AI solutions.", "gt_context": "Tata Partners With NVIDIA to Build Large-Scale AI Infrastructure\n\nState-of-the-Art AI Supercomputer to Provide Infrastructure-as-a-Service and Platform for AI Services\n\nNVIDIA today announced an extensive collaboration with Tata Group to deliver AI computing infrastructure and platforms for developing AI solutions. The collaboration will bring state-of-the-art AI capabilities within reach to thousands of organizations, businesses and AI researchers, and hundreds of startups in India.\n\nThe companies will work together to build an AI supercomputer powered by the next-generation NVIDIA\u00ae GH200 Grace Hopper Superchip to achieve performance that is best in class.\n\n\u201cThe global generative AI race is in full steam,\u201d said Jensen Huang, founder and CEO of NVIDIA. \u201cData centers worldwide are shifting to GPU computing to build energy-efficient infrastructure to support the exponential demand for generative AI.\n\n\u201cWe are delighted to partner with Tata as they expand their cloud infrastructure service with NVIDIA AI supercomputing to support the exponential demand of generative AI startups and processing of large language models.\u201d Huang said.\n\nTata Communications and NVIDIA will develop an AI cloud in India aimed at providing critical infrastructure that enables computing\u2019s next lifecycle. Tata Communications\u2019 robust global network combined with the AI cloud will empower enterprises to transfer data across the AI cloud at high speeds, enabling them to effectively bring the AI cloud to the doorstep of every enterprise.\n\nTCS will utilize the AI infrastructure and capabilities to build and process generative AI applications. The NVIDIA partnership will further enable TCS in collaborating with its customers to drive reimagination with an AI-first approach. Additionally, TCS will upskill its 600,000-strong workforce leveraging the partnership.\n\nThis partnership will also catalyze the AI-led transformation across Tata Group companies ranging from manufacturing to consumer businesses.\n\nCommenting on the collaboration with NVIDIA, N. Chandrasekaran, chairman of Tata Sons, said: \u201cThe advancements in AI have made focus on AI a central priority in governments, industries and society at large. The impact of AI and machine learning is going to be profound across industries and every aspect of our lives. This is a key transformational trend of the decade and every company must prepare to make this AI transition. Our partnership with NVIDIA will democratize access to AI infrastructure, accelerate build-out of AI solutions and enable upgradation of AI talent at scale. Tata Group\u2019s presence across sectors coupled with NVIDIA\u2019s deep capabilities offers numerous opportunities for collaboration to advance India\u2019s AI ambition.\u201d", "document": "VGF0YSA5LzgvMjMucGRm.pdf"}, {"question": "When was the Tata Group founded?", "gt_answer": "The Tata Group was founded in 1868.", "gt_context": "About the Tata Group Founded by Jamsetji Tata in 1868, the Tata Group is a global enterprise, headquartered in India, comprising 30 companies across ten verticals. The group operates in more than 100 countries across six continents, with a mission \u201cTo improve the quality of life of the communities we serve globally, through long-term stakeholder value creation based on Leadership with Trust.\u201d\n\nTata Sons is the principal investment holding company and promoter of Tata companies. Sixty-six percent of the equity share capital of Tata Sons is held by philanthropic trusts, which support education, health, livelihood generation and art and culture.\n\nIn 2022-23, the revenue of Tata companies, taken together, was $150 billion (INR 12 trillion). These companies collectively employ over 1 million people.\n\nEach Tata company or enterprise operates independently under the guidance and supervision of its own board of directors. There are 29 publicly listed Tata enterprises with a combined market capitalisation of $300 billion (INR 24trillion) as on July 31, 2023.\n\nAbout NVIDIA Since its founding in 1993, NVIDIA (NASDAQ: NVDA) has been a pioneer in accelerated computing. The company\u2019s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, ignited the era of modern AI and is fueling industrial digitalization across markets. NVIDIA is now a full-stack computing company with data- center-scale offerings that are reshaping industry. More information at https://nvidianews.nvidia.com/.\n\nCertain statements in this press release including, but not limited to, statements as to: NVIDIA\u2019s collaboration with Tata", "document": "VGF0YSA5LzgvMjMucGRm.pdf"}, {"question": "What are the benefits of NVIDIA's products, services, and technologies?", "gt_answer": "The benefits of NVIDIA's products, services, and technologies include energy-efficient infrastructure to support the exponential demand for generative AI, improved processing of large language models, and enhanced performance when integrated into systems.", "gt_context": "Group, including the benefits and impact thereof; the benefits and impact of NVIDIA\u2019s products, services, and technologies, including the GH200 Grace Hopper Superchip; the global generative AI race being in full steam; data centers worldwide shifting to GPU computing to build energy-efficient infrastructure to support the exponential demand for generative AI; NVIDIA partnering with Tata as they expand their cloud infrastructure service with NVIDIA AI supercomputing to support the exponential demand of generative AI startups and processing of large language models; the impact of AI and machine learning; and Tata Group\u2019s presence across sectors coupled with NVIDIA\u2019s capabilities offering numerous opportunities for collaboration are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners\u2019 products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company\u2019s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.\n\n\u00a9 2023 NVIDIA Corporation. All Rights Reserved. NVIDIA, the NVIDIA logo, and NVIDIA Grace Hopper are registered trademarks and/or trademarks of NVIDIA Corporation in the United States and other countries. All other trademarks and copyrights are the property of their respective owners.\n\nRohit Biddappa NVIDIA India rbiddappa@nvidia.com", "document": "VGF0YSA5LzgvMjMucGRm.pdf"}, {"question": "What are the benefits of using Isaac Sim in the cloud?", "gt_answer": "Using Isaac Sim in the cloud allows roboticists to generate large datasets for training AI-based perception models, test software in parallel simulations, and perform compute-intensive tasks like CI/CD and synthetic data generation.", "gt_context": "NVIDIA Robotics Software Jumps to the Cloud, Enabling Collaborative, Accelerated Development of Robots\n\nIsaac Sim on the new Omniverse Cloud platform makes testing and training of virtual robots more widely accessible, offering agility and scalability.\n\nAuthor: Gerard Andrews\n\nRobotics developers can span global teams testing for navigation of environments, underscoring the importance of easy access to simulation software for quick input and iterations.\n\nAt GTC today, NVIDIA founder and CEO Jensen Huang announced that the Isaac Sim robotics simulation platform is now available on the cloud.\n\nDevelopers will have three options to access it. It will soon be available on the new NVIDIA Omniverse Cloud platform, a suite of services that enables developers to design and use metaverse applications from anywhere. It\u2019s available now on AWS RoboMaker , a cloud-based simulation service for robotics development and testing. And, developers can download it from NVIDIA NGC and deploy it to any public cloud.\n\nWith these choices for accessing Isaac Sim in the cloud, individuals and teams can develop, test and train AI-enabled robots at scale and in the workflow that fits their needs. And it comes at a time when the need is greater than ever.\n\nConsider that the mobile robotics market is expected to grow 9x worldwide from $13 billion in 2021 to over $123 billion in 2030, according to ABI Research.\n\n\u201cNVIDIA\u2019s move to provide its visual computing capabilities as an autonomous robot training platform in the cloud should further enable the growing number of companies and developers building next-generation intelligent machines for numerous applications,\u201d said Rob Enderle, principal analyst for the Enderle Group.\n\nUsing Isaac Sim in the cloud, roboticists will be able to generate large datasets from physically accurate sensor simulations to train the AI-based perception models on their robots. The synthetic data generated in these simulations improves the model performance and provides training data that often can\u2019t be collected in the real world.\n\nDevelopers can now test the robot\u2019s software by launching batches of parallel simulations that exercise the software stack in numerous environments and across varying conditions to ensure that the robots perform as designed. Continuous testing and continuous delivery, or CI/CD, of the evolving robotics software stack is an important component of successful robotics deployments.\n\nIsaac Sim in the cloud will make it easy to meet the most compute-intensive simulation tasks like CI/CD and synthetic data generation.\n\nThe upcoming release of Isaac Sim will include NVIDIA cuOpt , a real-time fleet task-assignment and route-planning engine for optimizing robot path planning. Tapping into the accelerated performance of the cloud, teams can make dynamic, data-driven decisions, whether designing the ideal warehouse layout or optimizing active operations.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMDkvMjAvbnZpZGlhLWlzYWFjLXNpbS1yb2JvdGljcy1zaW11bGF0aW9uLw==.pdf"}, {"question": "What are the benefits of running Isaac Sim in the cloud?", "gt_answer": "Running Isaac Sim in the cloud allows developers to be location-independent and eliminates the need for a powerful workstation. Simulations can be run on any device, and results can be easily shared with partners, customers, and co-workers.", "gt_context": "Developing robots is a multidisciplinary endeavor. Mechanical engineers, electrical engineers, computer scientists and AI engineers come together to build the robot. With Isaac Sim in the cloud, these teams can be located across the globe and still able to share a virtual world in which to simulate\n\nand train robots.\n\nRunning Isaac Sim in the cloud means that developers will no longer be tied to a powerful workstation to run simulations. Any device will be able to set up, manage and review the results of simulations.\n\nResults can be shared outside of the simulation team with potential partners, customers and co-workers.\n\nRegister free to attend the two-hour hands-on workshop at GTC on using Isaac Sim with AWS RoboMmaker .\n\nAlso, learn more about Isaac Sim features and capabilities in the following GTC sessions:\n\nLeveraging Simulation Tools to Develop AI-Based Robots\n\nHow to Build a Digital Twin: Bringing in Robotics\n\nApply for early access to Isaac Sim on Omniverse Cloud Services .\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/09/20/nvidia-isaac-sim-robotics-simulation/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMDkvMjAvbnZpZGlhLWlzYWFjLXNpbS1yb2JvdGljcy1zaW11bGF0aW9uLw==.pdf"}, {"question": "What does Viaduct's TSI engine do?", "gt_answer": "Viaduct's TSI engine handles time-series analytics by aggregating manufacturing, telematics, and service data.", "gt_context": "Quality Control Patrol: Startup Builds Models for Detecting Vehicle Failure Patterns\n\nViaduct is helping vehicle and parts manufacturers reduce warranty claims and defects.\n\nAuthor: Scott Martin\n\nWhen it comes to preserving profit margins, data scientists for vehicle and parts manufacturers are sitting in the driver\u2019s seat.\n\nViaduct, which develops models for time-series inference, is helping enterprises harvest failure insights from the data captured on today\u2019s connected cars. It does so by tapping into sensor data and making correlations.\n\nThe four-year-old startup, based in Menlo Park, Calif., offers a platform to detect anomalous patterns, track issues, and deploy failure predictions. This enables automakers and parts suppliers to get in front of problems with real-time data to reduce warranty claims, recalls and defects, said David Hallac, the founder and CEO of Viaduct.\n\n\u201cViaduct has deployed on more than 2 million vehicles, helped avoid 500,000 hours of downtime and saved hundreds of millions of dollars in warranty costs across the industry,\u201d he said.\n\nThe company relies on NVIDIA A100 Tensor Core GPUs and the NVIDIA Time Series Prediction Platform (TSPP) framework for training, tuning and deploying time-series models, which are used to forecast data.\n\nViaduct has deployed with more than five major manufacturers of passenger cars and commercial trucks, according to the company.\n\n\u201cCustomers see it as a huge savings \u2014 the things that we are affecting are big in terms of profitability,\u201d said Hallac. \u201cIt\u2019s downtime impact, it\u2019s warranty impact and it\u2019s product development inefficiency.\u201d\n\nViaduct is a member of NVIDIA Inception , a program that provides companies with technology support and AI platforms guidance.\n\nHallac\u2019s path to Viaduct began at Stanford University. While he was a Ph.D. student there, Volkswagen came to the lab he was at with sensor data collected from more than 60 drivers over the course of several months and a research grant to explore uses.\n\nThe question the researchers delved into was how to understand the patterns and trends in the sizable body of vehicle data collected over months.\n\nThe Stanford researchers in coordination with Volkswagen Electronics Research Laboratory released a paper on the work, which highlighted Drive2Vec, a deep learning method for embedding sensor data.\n\n\u201cWe developed a bunch of algorithms focused on structural inference from high-dimensional time-series data. We were discovering useful insights, and we were able to help companies train and deploy predictive algorithms at scale,\u201d he said.\n\nViaduct handles time-series analytics with its TSI engine, which aggregates manufacturing, telematics and service data. Its model was trained with A100 GPUs tapping into NVIDIA TSPP.\n\n\u201cWe describe it as a knowledge graph \u2014 we\u2019re building this knowledge graph of all the different sensors and signals and how they correlate with each other,\u201d Hallac said.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDgvMTUvdmlhZHVjdC1kZXRlY3RpbmctdmVoaWNsZS1mYWlsdXJlLXBhdHRlcm5zLWExMDAtdHNwcC8=.pdf"}, {"question": "What benefits did the vehicle maker gain from using Viaduct's platform?", "gt_answer": "One vehicle maker using Viaduct\u2019s platform was able to handle some of its issues proactively, fix them and then identify which vehicles were at risk of those issues and only request owners to bring those in for service. This not only affects the warranty claims but also the service desks, which get more visibility into the types of vehicle repairs coming in.", "gt_context": "Several key features are generated using the Drive2Vec autoencoder for embedding sensor data. Correlations are learned via a Markov random field inference process, and the time series predictions\n\ntap into the NVIDIA TSPP framework.\n\nNVIDIA GPUs on this platform enable Viaduct to achieve as much as a 30x better inference accuracy compared with CPU systems running logistics regression and gradient boosting algorithms, Hallac said.\n\nOne vehicle maker using Viaduct\u2019s platform was able to handle some of its issues proactively, fix them and then identify which vehicles were at risk of those issues and only request owners to bring those in for service. This not only affects the warranty claims but also the service desks, which get more visibility into the types of vehicle repairs coming in.\n\nAlso, as vehicle and parts manufacturers are partnered on warranties, the results matter for both.\n\nViaduct reduced warranty costs for one customer by more than $50 million on five issues, according to the startup.\n\n\u201cEveryone wants the information, everyone feels the pain and everyone benefits when the system is optimized,\u201d Hallac said of the potential for cost-savings.\n\nViaduct began working with a major automaker last year to help with quality-control issues. The partnership aimed to improve its time-to-identify and time-to-fix post-production quality issues.\n\nThe automaker\u2019s JD Power IQS (Initial Quality Study) score had been falling while its warranty costs were climbing, and the company sought to reverse the situation. So, the automaker began using Viaduct\u2019s platform and its TSI engine.\n\nIn A/B testing Viaduct\u2019s platform against traditional reactive approaches to quality control, the automaker was able to identify issues on average 53 days earlier during the first year of a vehicle launch. The results saved \u201ctens of millions\u201d in warranty costs and the vehicle\u2019s JD Power quality and reliability score increased \u201cmultiple points\u201d compared with the previous model year, according to Hallac.\n\nAnd Viaduct is getting customer traction that reflects the value of its AI to businesses, he said.\n\nLearn more about NVIDIA A100 and NVIDIA TSPP .\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/08/15/viaduct-detecting-vehicle-failure-patterns-a100-tspp/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDgvMTUvdmlhZHVjdC1kZXRlY3RpbmctdmVoaWNsZS1mYWlsdXJlLXBhdHRlcm5zLWExMDAtdHNwcC8=.pdf"}, {"question": "What does NVIDIA Clara provide a platform for?", "gt_answer": "NVIDIA Clara provides a platform for healthcare work, used by healthcare experts around the world.", "gt_context": "NVIDIA, Oracle CEOs in Fireside Chat Light Pathways to Enterprise AI\n\nAuthor: Rick Merritt\n\nSpeeding adoption of enterprise AI and accelerated computing , Oracle CEO Safra Catz and NVIDIA founder and CEO Jensen Huang discussed their companies\u2019 expanding collaboration in a fireside chat live streamed today from Oracle CloudWorld in Las Vegas.\n\nOracle and NVIDIA announced plans to bring NVIDIA\u2019s full accelerated computing stack to Oracle Cloud Infrastructure (OCI). It includes NVIDIA AI Enterprise , NVIDIA RAPIDS for Apache Spark and NVIDIA Clara for healthcare.\n\nIn addition, OCI will deploy tens of thousands more NVIDIA GPUs to its cloud service, including A100 and upcoming H100 accelerators.\n\n\u201cI\u2019m unbelievably excited to announce our renewed partnership and the expanded capabilities our cloud has,\u201d said Catz to a live and online audience of several thousand customers and developers.\n\n\u201cWe\u2019re thrilled you\u2019re bringing your AI solutions to OCI,\u201d she told Huang.\n\nThe combination of Oracle\u2019s heritage in data and its powerful infrastructure with NVIDIA\u2019s expertise in AI will give users traction facing tough challenges ahead, Huang said.\n\n\u201cIndustries around the world need big benefits from our industry to find ways to do more without needing to spend more or consume more energy,\u201d he said.\n\nAI and GPU-accelerated computing are delivering these benefits at a time when traditional methods of increasing performance are slowing, he added.\n\n\u201cData that you harness to find patterns and relationships can automate the way you work and the products and services you deliver \u2014 the next ten years will be some of the most exciting times in our industry,\u201d Huang said.\n\n\u201cI\u2019m confident all workloads will be accelerated for better performance, to drive costs out and for energy efficiency,\u201d he added.\n\nThe capability of today\u2019s software and hardware, coming to the cloud, \u201cis something we\u2019ve dreamed about since our early days,\u201d said Catz, who joined Oracle in 1999 and has been its CEO since 2014.\n\n\u201cOne of the most critical areas is saving lives,\u201d she added, pointing to the two companies\u2019 work in healthcare.\n\nA revolution in digital biology is transforming healthcare from a science-driven industry to one powered by both science and engineering. And NVIDIA Clara provides a platform for that work, used by healthcare experts around the world, Huang said.\n\n\u201cWe can now use AI to understand the language of proteins and chemicals, all the way to gene screening and quantum chemistry \u2014 amazing breakthroughs are happening now,\u201d he said.\n\nAI promises similar advances for every business. The automotive industry, for example, is becoming a tech industry as it discovers its smartphone moment, he said.\n\n\u201cWe see this all over with big breakthroughs in natural language processing and large language models that can encode human knowledge to apply to all kinds of skills they were never trained to do,\u201d he said.\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/10/18/oracle-catz-nvidia-huang/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTAvMTgvb3JhY2xlLWNhdHotbnZpZGlhLWh1YW5nLw==.pdf"}, {"question": "What are some of the topics that will be discussed at Jensen Huang's keynote at NVIDIA GTC?", "gt_answer": "The topics that will be discussed at Jensen Huang's keynote at NVIDIA GTC include natural language understanding, the metaverse and the 3D internet, new gaming technology, and advanced AI technologies impacting various industries.", "gt_context": "Get Up to Speed: Five Reasons Not to Miss NVIDIA CEO Jensen Huang\u2019s GTC Keynote Sept. 20\n\nAuthor: Claudia Cook\n\nNatural language understanding, the metaverse and the 3D internet, new gaming technology, and advanced AI technologies impacting industries as varied as transportation, healthcare, finance and entertainment are all coming your way. From advances in robotics to supercomputers and hyperscale data centers, the brightest minds in science, industry and the public sector will discuss the latest breakthroughs at GTC.\n\nNVIDIA founder and CEO Jensen Huang\u2019s keynote at NVIDIA GTC on Tuesday, Sept. 20, is the best way to get ahead of all these trends.\n\nNVIDIA\u2019s virtual technology conference, which takes place Sept. 19-22, sits at the intersections of business and technology, science and the arts in a way no other event can.\n\nThis GTC will focus on neural graphics \u2014 which bring together AI and visual computing to create stunning new possibilities \u2014 the metaverse, an update on large language models , and the changes coming to every industry with the latest generation of recommender systems.\n\nThe free online gathering features speakers from every corner of industry, academia and research.\n\nSpeakers include Johnson & Johnson CTO Rowena Yao; Boeing Vice President Linda Hapgood; Polestar COO Dennis Nobelius; Deutsche Bank CTO Bernd Leukert; UN Assistant Secretary-General Ahunna Eziakonwa; UC San Diego distinguished professor Henrik Christensen, and hundreds more.\n\nFor those who want to get hands on, GTC features developer sessions for newbies and veteran developers.\n\nTwo-hour training labs are included for those who sign up for a free conference pass. Those who want to dig deeper can sign up for one of 21 full-day virtual hands-on workshops at a special price of $149, and for group purchases of more than five seats, we are offering a special of $99 per seat.\n\nFinally, GTC offers networking opportunities that bring together people working on the most challenging problems of our time from all over the planet.\n\nRegister free and start loading up your calendar with content today .\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/09/13/gtc-keynote-2/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMDkvMTMvZ3RjLWtleW5vdGUtMi8=.pdf"}, {"question": "What new games are joining the GeForce NOW library?", "gt_answer": "Dragon\u2019s Dogma: Dark Arisen and Jagged Alliance 3.", "gt_context": "Full-Scale Gaming: \u2018Dragon\u2019s Dogma: Dark Arisen\u2019 Comes to GeForce NOW\n\nCapcom\u2019s acclaimed open-world role-playing game and THQ Nordic\u2019s \u2018Jagged Alliance 3\u2019 are now streaming from the cloud.\n\nAuthor: GeForce NOW Community\n\nArise, members! Capcom\u2019s legendary role-playing game Dragon\u2019s Dogma: Dark Arisen joins the GeForce NOW library today.\n\nThe RPG and THQ Nordic\u2019s Jagged Alliance 3 are newly supported on GeForce NOW, playable on nearly any device.\n\nBecome the Arisen and take up the challenge in Capcom\u2019s critically acclaimed RPG. Set in a huge open world, Dragon\u2019s Dogma: Dark Arisen brings players on an epic adventure filled with challenging battles and action.\n\nBut there\u2019s no need to go it alone: Adventure with up to three Pawns. These customizable AI companions fight independently, demonstrating prowess and ability they\u2019ve developed based on traits learned from each player.\n\nPlayers can share their Pawns online and reap rewards of treasures, tips and strategy hints for taking down terrifying enemies. Pawns can also be borrowed when specific skills are needed to complete various challenging quests.\n\nRevisit Gransys or experience Dragon\u2019s Dogma for the first time. Members can play the real Steam version of this RPG classic with support for stunning visuals and high-resolution graphics, even on devices like Macs, mobile devices and smart TVs. Priority members can adventure at up to 1080p 60 frames per second, or upgrade to an Ultimate membership for gameplay at up to 4K 120 fps, longer streaming sessions and RTX ON for supported games.\n\nAnother week means new games.\n\nTHQ Nordic\u2019s tactical RPG Jagged Alliance 3 joins the cloud this week. Chaos reigns when the elected president of Grand Chien \u2014 a nation of rich natural resources and deep political divides \u2014 goes missing and a paramilitary force known as \u201cThe Legion\u201d seizes control of the countryside. Recruit from a large cast of unique mercenaries and make choices to impact the country\u2019s fate.\n\nMembers can look forward to the following this week:\n\nJagged Alliance 3 (New release on Steam , July 14)\n\nDragon\u2019s Dogma: Dark Arisen ( Steam )\n\nOn top of that, in collaboration with EE, the U.K.\u2019s biggest and fastest mobile network, GeForce NOW launched new cloud gaming bundles featuring Priority and Ultimate memberships. To celebrate, check out how streamer Leah \u2018Leahviathan\u2019 Alexandra showcased GeForce NOW in action at the U.K.\u2019s highest-altitude gaming den on the slopes of Ben Nevis, 1,500 feet above sea level in the clouds of the Scottish Highlands.\n\nWhat are you planning to play this weekend? Let us know on Twitter or in the comments below. Who's an NPC you'd want to be friends with IRL? n \u2014 nn NVIDIA GeForce NOW (@NVIDIAGFN) July 12, 2023\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/07/13/geforce-now-thursday-july-13/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDcvMTMvZ2Vmb3JjZS1ub3ctdGh1cnNkYXktanVseS0xMy8=.pdf"}, {"question": "What companies did NVIDIA partner with to bring new AI capabilities?", "gt_answer": "NVIDIA partnered with Google, Microsoft, Oracle, and a range of leading businesses.", "gt_context": "NVIDIA to Bring AI to Every Industry, CEO Says From AI training to deployment, semiconductors to software libraries, systems to cloud services, NVIDIA CEO Jensen Huang outlined how a new generation of breakthroughs will be put at the world\u2019s fingertips.\n\nAuthor: Brian Caulfield\n\nChatGPT is just the start.\n\nWith computing now advancing at what he called \u201clightspeed,\u201d NVIDIA founder and CEO Jensen Huang today announced a broad set of partnerships with Google, Microsoft, Oracle and a range of leading businesses that bring new AI, simulation and collaboration capabilities to every industry.\n\n\u201cThe warp drive engine is accelerated computing, and the energy source is AI,\u201d Huang said in his keynote at the company\u2019s GTC conference. \u201cThe impressive capabilities of generative AI have created a sense of urgency for companies to reimagine their products and business models.\u201d\n\nIn a sweeping 78-minute presentation anchoring the four-day event, Huang outlined how NVIDIA and its partners are offering everything from training to deployment for cutting-edge AI services. He announced new semiconductors and software libraries to enable fresh breakthroughs. And Huang revealed a complete set of systems and services for startups and enterprises racing to put these innovations to work on a global scale.\n\nHuang punctuated his talk with vivid examples of this ecosystem at work. He announced NVIDIA and Microsoft will connect hundreds of millions of Microsoft 365 and Azure users to a platform for building and operating hyperrealistic virtual worlds. He offered a peek at how Amazon is using sophisticated simulation capabilities to train new autonomous warehouse robots. He touched on the rise of a new generation of wildly popular generative AI services such as ChatGPT.\n\nAnd underscoring the foundational nature of NVIDIA\u2019s innovations, Huang detailed how, together with ASML, TSMC and Synopsis, NVIDIA computational lithography breakthroughs will help make a new generation of efficient, powerful 2-nm semiconductors possible.\n\nThe arrival of accelerated computing and AI come just in time, with Moore\u2019s Law slowing and industries tackling powerful dynamics \u2014sustainability, generative AI, and digitalization, Huang said. \u201cIndustrial companies are racing to digitalize and reinvent into software-driven tech companies \u2014 to be the disruptor and not the disrupted,\u201d Huang said.\n\nAcceleration lets companies meet these challenges. \u201cAcceleration is the best way to reclaim power and achieve sustainability and Net Zero,\u201d Huang said.\n\nGTC, now in its 14th year, has become one of the world\u2019s most important AI gatherings. This week\u2019s conference features 650 talks from leaders such as Demis Hassabis of DeepMind , Valeri Taylor of Argonne Labs , Scott Belsky of Adobe , Paul Debevec of Netflix , Thomas Schulthess of ETH Zurich and a special fireside chat between Huang and Ilya Sutskever, co-founder of OpenAI, the creator of ChatGPT .", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDMvMjEvZ3RjLWtleW5vdGUtc3ByaW5nLTIwMjMv.pdf"}, {"question": "What is the purpose of the H100 NVL GPU?", "gt_answer": "The H100 NVL GPU is designed for large-language-model inference, such as processing models like the GPT model that powers ChatGPT. It can reduce large language model processing costs by an order of magnitude.", "gt_context": "More than 250,000 registered attendees will dig into sessions on everything from restoring the lost Roman mosaics of 2,000 years ago to building the factories of the future, from exploring the universe with a new generation of massive telescopes to rearranging molecules to accelerate drug discovery , to more than 70 talks on generative AI.\n\nNVIDIA\u2019s technologies are fundamental to AI, with Huang recounting how NVIDIA was there at the very beginning of the generative AI revolution. Back in 2016 he hand-delivered to OpenAI the first NVIDIA DGX AI supercomputer \u2014 the engine behind the large language model breakthrough powering\n\nChatGPT.\n\nLaunched late last year, ChatGPT went mainstream almost instantaneously, attracting over 100 million users, making it the fastest-growing application in history. \u201cWe are at the iPhone moment of AI,\u201d Huang said.\n\nNVIDIA DGX supercomputers, originally used as an AI research instrument, are now running 24/7 at businesses across the world to refine data and process AI, Huang reported. Half of all Fortune 100 companies have installed DGX AI supercomputers.\n\n\u201cDGX supercomputers are modern AI factories,\u201d Huang said.\n\nDeploying LLMs like ChatGPT are a significant new inference workload, Huang said. For large-language-model inference, like ChatGPT, Huang announced a new GPU \u2014 the H100 NVL with dual-GPU NVLink.\n\nBased on NVIDIA\u2019s Hopper architecture, H100 features a Transformer Engine designed to process models such as the GPT model that powers ChatGPT. Compared to HGX A100 for GPT-3 processing, a standard server with four pairs of H100 with dual-GPU NVLink is up to 10x faster.\n\n\u201cH100 can reduce large language model processing costs by an order of magnitude,\u201d Huang said.\n\nMeanwhile, over the past decade, cloud computing has grown 20% annually into a $1 trillion industry, Huang said. NVIDIA designed the Grace CPU for an AI- and cloud-first world, where AI workloads are GPU accelerated. Grace is sampling now , Huang said.\n\nNVIDIA\u2019s new superchip, Grace Hopper, connects the Grace CPU and Hopper GPU over a high-speed 900GB/sec coherent chip-to-chip interface. Grace Hopper is ideal for processing giant datasets like AI databases for recommender systems and large language models, Huang explained.\n\n\u201cCustomers want to build AI databases several orders of magnitude larger,\u201d Huang said. \u201cGrace Hopper is the ideal engine.\u201d\n\nThe latest version of DGX features eight NVIDIA H100 GPUs linked together to work as one giant GPU. \u201cNVIDIA DGX H100 is the blueprint for customers building AI infrastructure worldwide,\u201d Huang said, sharing that NVIDIA DGX H100 is now in full production.\n\nH100 AI supercomputers are already coming online.\n\nOracle Cloud Infrastructure announced the limited availability of new OCI Compute bare-metal GPU instances featuring H100 GPUs.\n\nAdditionally, Amazon Web Services announced its forthcoming EC2 UltraClusters of P5 instances, which can scale in size up to 20,000 interconnected H100 GPUs.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDMvMjEvZ3RjLWtleW5vdGUtc3ByaW5nLTIwMjMv.pdf"}, {"question": "Which companies are partnering with NVIDIA to host DGX Cloud infrastructure?", "gt_answer": "NVIDIA is partnering with leading cloud service providers like Oracle Cloud Infrastructure, Microsoft Azure, Google Cloud, and more to host DGX Cloud infrastructure.", "gt_context": "This follows Microsoft Azure\u2019 s private preview announcement last week for its H100 virtual machine, ND H100 v5.\n\nMeta has now deployed its H100-powered Grand Teton AI supercomputer internally for its AI production and research teams.\n\nAnd OpenAI will be using H100s on its Azure supercomputer to power its continuing AI research.\n\nOther partners making H100 available include Cirrascale and CoreWeave , both which announced general availability today. Additionally, Google Cloud, Lambda , Paperspace and Vultr are planning to offer H100.\n\nAnd servers and systems featuring NVIDIA H100 GPUs are available from leading server makers including Atos, Cisco, Dell Technologies, GIGABYTE, Hewlett Packard Enterprise, Lenovo and Supermicro.\n\nAnd to speed DGX capabilities to startups and enterprises racing to build new products and develop AI strategies, Huang announced NVIDIA DGX Cloud , through partnerships with Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure to bring NVIDIA DGX AI supercomputers \u201cto every company, from a browser.\u201d\n\nDGX Cloud is optimized to run NVIDIA AI Enterprise , the world\u2019s leading acceleration software suite for end-to-end development and deployment of AI. \u201cDGX Cloud offers customers the best of NVIDIA AI and the best of the world\u2019s leading cloud service providers,\u201d Huang said.\n\nNVIDIA is partnering with leading cloud service providers to host DGX Cloud infrastructure, starting with Oracle Cloud Infrastructure. Microsoft Azure is expected to begin hosting DGX Cloud next quarter, and the service will soon expand to Google Cloud and more.\n\nThis partnership brings NVIDIA\u2019s ecosystem to cloud service providers while amplifying NVIDIA\u2019s scale and reach, Huang said. Enterprises will be able to rent DGX Cloud clusters on a monthly basis, ensuring they can quickly and easily scale the development of large, multi-node training workloads.\n\nTo accelerate the work of those seeking to harness generative AI, Huang announced NVIDIA AI Foundations , a family of cloud services for customers needing to build, refine and operate custom LLMs and generative AI trained with their proprietary data and for domain-specific tasks.\n\nAI Foundations services include NVIDIA NeMo for building custom language text-to-text generative models ; Picasso, a visual language model-making service for customers who want to build custom models trained with licensed or proprietary content ; and BioNeMo, to help researchers in the $2 trillion drug discovery industry.\n\nAdobe is partnering with NVIDIA to build a set of next-generation AI capabilities for the future of creativity.\n\nGetty Images is collaborating with NVIDIA to train responsible generative text-to-image and text-to-video foundation models.\n\nShutterstock is working with NVIDIA to train a generative text-to-3D foundation model to simplify the creation of detailed 3D assets.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDMvMjEvZ3RjLWtleW5vdGUtc3ByaW5nLTIwMjMv.pdf"}, {"question": "What is NVIDIA Omniverse Cloud?", "gt_answer": "NVIDIA Omniverse Cloud is a fully managed cloud service that provides unprecedented simulation and collaboration capabilities to enterprises.", "gt_context": "And NVIDIA announced Amgen is accelerating drug discovery services with BioNeMo. In addition, Alchemab Therapeutics, AstraZeneca, Evozyne, Innophore and Insilico are all early access users of BioNemo.\n\nBioNeMo helps researchers create, fine-tune and serve custom models with their proprietary data, Huang explained.\n\nHuang also announced that NVIDIA and Medtronic , the world\u2019s largest healthcare technology provider, are partnering to build an AI platform for software-defined medical devices. The partnership will create a common platform for Medtronic systems, ranging from surgical navigation to robotic-assisted surgery.\n\nAnd today Medtronic announced that its GI Genius system, with AI for early detection of colon cancer, is built on NVIDIA Holoscan, a software library for real-time sensor processing systems, and will ship around the end of this year.\n\n\u201cThe world\u2019s $250 billion medical instruments market is being transformed,\u201d Huang said.\n\nTo help companies deploy rapidly emerging generative AI models, Huang announced inference platforms for AI video, image generation, LLM deployment and recommender inference . They combine NVIDIA\u2019s full stack of inference software with the latest NVIDIA Ada, Hopper and Grace Hopper processors \u2014 including the NVIDIA L4 Tensor Core GPU and the NVIDIA H100 NVL GPU , both launched today.\n\n(cid:127) NVIDIA L4 for AI Video can deliver 120x more AI-powered video performance than CPUs, combined with 99% better energy efficiency.\n\n(cid:127) NVIDIA L40 for Image Generation is optimized for graphics and AI-enabled 2D, video and 3D image generation.\n\n(cid:127) NVIDIA H100 NVL for Large Language Model Deployment is ideal for deploying massive LLMs like ChatGPT at scale.\n\n(cid:127) And NVIDIA Grace Hopper for Recommendation Models is ideal for graph recommendation models, vector databases and graph neural networks.\n\nGoogle Cloud is the first cloud service provider to offer L4 to customers with the launch of its new G2 virtual machines, available in private preview today. Google is also integrating L4 into its Vertex AI model store.\n\nUnveiling a second cloud service to speed unprecedented simulation and collaboration capabilities to enterprises, Huang announced NVIDIA is partnering with Microsoft to bring NVIDIA Omniverse Cloud, a fully managed cloud service, to the world\u2019s industries .\n\n\u201cMicrosoft and NVIDIA are bringing Omniverse to hundreds of millions of Microsoft 365 and Azure users,\u201d Huang said, also unveiling new NVIDIA OVX servers and a new generation of workstations powered by NVIDIA RTX Ada Generation GPUs and Intel\u2019s newest CPUs optimized for NVIDIA Omniverse .\n\nTo show the extraordinary capabilities of Omniverse, NVIDIA\u2019s open platform built for 3D design collaboration and digital twin simulation, Huang shared a video showing how NVIDIA Isaac Sim, NVIDIA\u2019s robotics simulation and synthetic generation platform, built on Omniverse, is helping Amazon save time and money with full-fidelity digital twins.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDMvMjEvZ3RjLWtleW5vdGUtc3ByaW5nLTIwMjMv.pdf"}, {"question": "Which companies are using NVIDIA AI?", "gt_answer": "AT&T uses NVIDIA AI to more efficiently process data and is testing Omniverse ACE and the Tokkio AI avatar workflow to build virtual assistants.", "gt_context": "It shows how Amazon is working to choreograph the movements of Proteus, Amazon\u2019s first fully autonomous warehouse robot, as it moves bins of products from one place to another in Amazon\u2019s cavernous warehouses alongside humans and other robots.\n\nIllustrating the scale of Omniverse\u2019s reach and capabilities, Huang dug into Omniverse\u2019s role in digitalizing the $3 trillion auto industry . By 2030, auto manufacturers will build 300 factories to make 200 million electric vehicles, Huang said, and battery makers are building 100 more megafactories. \u201cDigitalization will enhance the industry\u2019s efficiency, productivity and speed,\u201d Huang said.\n\nTouching on Omniverse\u2019s adoption across the industry, Huang said Lotus is using Omniverse to virtually assemble welding stations. Mercedes-Benz uses Omniverse to build, optimize and plan assembly lines for new models. Rimac and Lucid Motors use Omniverse to build digital stores from actual design data that faithfully represent their cars.\n\nWorking with Idealworks, BMW uses Isaac Sim in Omniverse to generate synthetic data and scenarios to train factory robots. And BMW is using Omniverse to plan operations across factories worldwide and is building a new electric-vehicle factory, completely in Omniverse, two years before the plant opens, Huang said.\n\nSeparately. NVIDIA today announced that BYD, the world\u2019s leading manufacturer of new energy vehicles NEVs, will extend its use of the NVIDIA DRIVE Orin centralized compute platform in a broader range of its NEVs.\n\nEnabling semiconductor leaders such as ASML, TSMC and Synopsis to accelerate the design and manufacture of a new generation of chips as current production processes near the limits of what physics makes possible, Huang announced NVIDIA cuLitho , a breakthrough that brings accelerated computing to the field of computational lithography.\n\nThe new NVIDIA cuLitho software library for computational lithography is being integrated by TSMC, the world\u2019s leading foundry, as well as electronic design automation leader Synopsys into their software, manufacturing processes and systems for the latest-generation NVIDIA Hopper architecture GPUs.\n\nChip-making equipment provider ASML is working closely with NVIDIA on GPUs and cuLitho, and plans to integrate support for GPUs into all of their computational lithography software products. With lithography at the limits of physics, NVIDIA\u2019s introduction of cuLitho enables the industry to go to 2nm and beyond, Huang said.\n\n\u201cThe chip industry is the foundation of nearly every industry,\u201d Huang said.\n\nCompanies around the world are on board with Huang\u2019s vision.\n\nTelecom giant AT&T; uses NVIDIA AI to more efficiently process data and is testing Omniverse ACE and the Tokkio AI avatar workflow to build, customize and deploy virtual assistants for customer service and its employee help desk.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDMvMjEvZ3RjLWtleW5vdGUtc3ByaW5nLTIwMjMv.pdf"}, {"question": "Which companies are using NVIDIA Triton?", "gt_answer": "American Express, the U.S. Postal Service, Microsoft Office and Teams, and Amazon are among the 40,000 customers using NVIDIA Triton.", "gt_context": "American Express, the U.S. Postal Service, Microsoft Office and Teams, and Amazon are among the 40,000 customers using the high-performance NVIDIA TensorRT inference optimizer and runtime, and NVIDIA Triton, a multi-framework data center inference serving software.\n\nUber uses Triton to serve hundreds of thousands of ETA predictions per second.\n\nAnd with over 60 million daily users, Roblox uses Triton to serve models for game recommendations, build avatars, and moderate content and marketplace ads.\n\nMicrosoft, Tencent and Baidu are all adopting NVIDIA CV-CUDA for AI computer vision. The technology, in open beta, optimizes pre- and post-processing, delivering 4x savings in cost and energy.\n\nWrapping up his talk, Huang thanked NVIDIA\u2019s systems, cloud and software partners, as well as researchers, scientists and employees.\n\nNVIDIA has updated 100 acceleration libraries, including cuQuantum and the newly open-sourced CUDA Quantum for quantum computing, cuOpt for combinatorial optimization, and cuLitho for computational lithography, Huang announced.\n\nThe global NVIDIA ecosystem, Huang reported, now spans 4 million developers, 40,000 companies and 14,000 startups in NVIDIA Inception.\n\n\u201cTogether,\u201d Huang said. \u201cWe are helping the world do the impossible.\u201d\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/03/21/gtc-keynote-spring-2023/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDMvMjEvZ3RjLWtleW5vdGUtc3ByaW5nLTIwMjMv.pdf"}, {"question": "What is the significance of neural graphics in the metaverse?", "gt_answer": "Neural graphics, the unification of AI and graphics, will make metaverse content creation available to everyone. It enhances results, automates design choices, and unlocks new opportunities for creativity.", "gt_context": "Dive Into AI, Avatars and the Metaverse With NVIDIA at SIGGRAPH\n\nAuthor: Greg Estes\n\nInnovative technologies in AI, virtual worlds and digital humans are shaping the future of design and content creation across every industry. Experience the latest advances from NVIDIA in all these areas at SIGGRAPH , the world\u2019s largest gathering of computer graphics experts, running Aug. 8-11.\n\nAt the conference, creators, developers, engineers, researchers and students will see all the new tech and research that enables them to elevate immersive storytelling, build realistic avatars and create stunning 3D virtual worlds.\n\nNVIDIA\u2019s special address on Tuesday, Aug. 9, at 9 a.m. PT will feature founder and CEO Jensen Huang, along with other senior leaders. Join to get an exclusive look at some of our most exciting work, from award-winning research to new AI-powered tools and solutions.\n\nDiscover the emergence of the metaverse, and see how users can build 3D content and connect photorealistic virtual worlds with NVIDIA Omniverse , a computing platform for 3D design collaboration and true-to-reality world simulation. See the advanced solutions that are powering these 3D worlds, and how they expand the realm of artistic expression and creativity.\n\nNVIDIA is also presenting over 20 in-person sessions at SIGGRAPH , including hands-on labs and research presentations. Explore the session topics below to build your calendar for the event:\n\nSee how users can create assets and build virtual worlds for the metaverse using the power and versatility of Universal Scene Description (USD) with this presentation:\n\nThe Next Evolution of USD for Building Virtual Worlds . Learn about the importance of USD at this session on Wednesday, Aug. 10, at 1 p.m. PT. Get a look at a USD development roadmap from NVIDIA, and learn more about its recent USD projects and initiatives.\n\nFind out how to accelerate complex 3D workflows and content creation for the metaverse. Discover groundbreaking ways to visualize, simulate and code with advanced solutions like NVIDIA Omniverse in sessions including:\n\nReal-Time Collaboration in Ray-Traced VR . Discover the recent leaps in hardware architecture and graphics software that have made ray tracing at virtual-reality frame rates possible at this session on Monday, Aug. 8, at 5 p.m. PT.\n\nMaterial Workflows in Omniverse . Learn how to improve graphics workflows with arbitrary material shading systems supported in Omniverse at this talk on Thursday, Aug. 11, at 9 a.m. PT.\n\nLearn more about neural graphics \u2014 the unification of AI and graphics \u2014 which will make metaverse content creation available to everyone. From 3D assets to animation, see how AI integration can enhance results, automate design choices and unlock new opportunities for creativity in the metaverse. Check out the session below:", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMDgvMDQvYWktYXZhdGFycy1tZXRhdmVyc2Utc2lnZ3JhcGgv.pdf"}, {"question": "When and where will the presentation on Instant NeRF be held?", "gt_answer": "The presentation on Instant NeRF will be held on Tuesday, Aug. 9, at 3 p.m. PT.", "gt_context": "NVIDIA Instant NeRF \u2013 Getting Started with Neural Radiance Fields . Researchers will discuss how Instant NeRF can enhance 2D-to-3D visualization-development projects at this presentation on Tuesday, Aug. 9, at 3 p.m. PT.\n\nGet insights on the latest technologies transforming industries, from cloud production to extended reality. Discover how leading film studios, cutting-edge startups and other graphics companies are building and supporting their technologies with NVIDIA solutions. Some must-see sessions include:\n\nILM Presents: Leveraging AI in Visual Effects and StageCraft Virtual Production . Hear how an Industrial Light & Magic team uses NVIDIA\u2019s powerful AI-enabled DeepSearch tool to search through its massive asset library. This presentation takes place on Monday, Aug. 8, at 4 p.m. PT.\n\nThe Future of Extended Reality: How Immersion Will Change Everything. Industry luminaries will discuss the technologies that are impacting the future of extended reality in this panel, which takes place on Tuesday, Aug. 9, at 10 a.m. PT.\n\nMetaphysic: Creating Hyperreal Avatars and Synthetic Humans for Web3 and Feature Films . Jo Plaete, a world leader in creating hyperreal AI-generated content, will showcase some of the latest work undertaken by AI company Metaphysic in this presentation on Thursday, Aug. 11, at 1 p.m. PT.\n\nSIGGRAPH registration is required to attend the in-person events. Sessions will also be available the following day to watch on demand from our site.\n\nMany NVIDIA partners will attend SIGGRAPH, showcasing demos and presenting on topics such as AI and virtual worlds. Download this event map to learn more.\n\nAnd tune into the global premiere of The Art of Collaboration: NVIDIA, Omniverse and GTC on Wednesday, Aug. 10, at 10 a.m. PT. The documentary shares the story of the engineers, artists and researchers who pushed the limits of NVIDIA GPUs, AI and Omniverse to deliver the stunning GTC keynote last spring.\n\nJoin NVIDIA at SIGGRAPH to learn more, and watch NVIDIA\u2019s special address to hear the latest on graphics, AI and virtual worlds.\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/08/04/ai-avatars-metaverse-siggraph/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMDgvMDQvYWktYXZhdGFycy1tZXRhdmVyc2Utc2lnZ3JhcGgv.pdf"}, {"question": "What solutions is NVIDIA showcasing at Gartner IT IOCS?", "gt_answer": "NVIDIA is showcasing their Base Command Platform, AI Enterprise, and DGX systems at Gartner IT IOCS.", "gt_context": "NVIDIA Showcases Expertise and Solutions to Speed Returns on AI Investments, at Gartner IT IOCS\n\nDiscover ways to accelerate the return on investment of AI and achieve goals sooner.\n\nAuthor: Carolyne Van Den Hoogen\n\nMany enterprises have increased their investments in AI to drive business transformation, but most still find it difficult to deliver value in production-ready applications.\n\nAt Gartner IT IOCS, NVIDIA is sharing its strategies for enterprise IT, including infrastructure, tools and expertise that can help businesses bring AI models into production sooner, reducing wasted resources and achieving their goals sooner.\n\nIT leaders will learn how to use the NVIDIA AI platform to implement better hybrid-cloud infrastructure that\u2019s optimized for the unique demands of AI development, leverage tools that make it easier for IT managers to streamline AI model pipelines and unleash developer productivity.\n\nJoin us for our session aimed at solution providers, \u201cMaking IT the Hero of AI: Selecting the Right Platform for Innovation in 2023,\u201d (SPS049) on Thursday, Dec. 8, at 11:45 a.m. PT. ( Register here .) Charlie Boyle, vice president of DGX Systems at NVIDIA, will share insights and examples from customer use cases that can help IT leaders on their journey to building an AI-infused enterprise.\n\nAttend Gartner IT IOCS 2022 and drop by NVIDIA booth 111 , where you\u2019ll learn from our AI experts how:\n\nNVIDIA Base Command Platform provides a centralized, single-pane view of AI model development with monitoring and dashboards to accelerate AI initiatives. NVIDIA\u2019s own data scientists created the platform to help speed their development of AI initiatives. Base Command Platform unifies multiple teams around the globe to manage AI model development.\n\nNVIDIA AI Enterprise offers a cloud-native suite of AI and data analytics software optimized for the development and deployment of AI. The software is an extensive library of application workflows that streamlines AI development and deployment of enterprise AI solutions using conversational AI, vision AI, cybersecurity and more. These workflows save enterprises from weeks to months of grunt work and help realize gains faster.\n\nNVIDIA DGX systems are setting the bar for enterprise AI infrastructure.\n\nPurpose-built to meet the demands of enterprise AI and data science, DGX systems deliver the fastest start in AI development, effortless productivity and revolutionary performance \u2014 for insights in hours instead of months.\n\nTry out NVIDIA Base Command Platform with LaunchPad Labs by applying for the AI Center of Excellence lab . And sign up to try out NVIDIA AI Enterprise with LaunchPad Labs.\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/12/01/gartner-it-iocs/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTIvMDEvZ2FydG5lci1pdC1pb2NzLw==.pdf"}, {"question": "What is NVIDIA Omniverse?", "gt_answer": "NVIDIA Omniverse is a real-time design collaboration and simulation platform that connects virtual worlds and is available to NVIDIA Studio creators.", "gt_context": "Groundbreaking Updates to NVIDIA Studio Power the 3D Virtual Worlds of Tomorrow, Today Major advancements for creators announced during CES special address include Studio Laptops with new GeForce RTX 3080 Ti GPUs, NVIDIA Omniverse general availability and NVIDIA Canvas update.\n\nAuthor: Stanley Tack\n\nWe\u2019re at the dawn of the next digital frontier. Creativity is fueling new developments in design, innovation and virtual worlds.\n\nFor the creators driving this future, we\u2019ve built NVIDIA Studio , a fully accelerated platform with high-performance GPUs as the heartbeat for laptops and desktops.\n\nThis hardware is paired with exclusive NVIDIA RTX-accelerated software optimizations in top creative apps and a suite of tools like NVIDIA Omniverse, Canvas and Broadcast, which help creators enhance their workflows.\n\nAnd it\u2019s all supported by specialized drivers that are updated monthly for performance and reliability \u2014 like the January Studio Driver, available starting today.\n\nThe interconnected 3D virtual worlds of tomorrow are being built today. NVIDIA Omniverse , designed to be the foundation that connects these virtual worlds, is now available to millions of NVIDIA Studio creators using GeForce RTX and NVIDIA RTX GPUs.\n\nWe\u2019ve also introduced GeForce RTX 3080 Ti and 3070 Ti-based Studio laptops, groundbreaking hardware with heightened levels of performance \u2014 especially on battery.\n\nUpdated with a major increase in fidelity, new materials and an upgraded AI model, NVIDIA Canvas enables artists to turn simple brushstrokes into realistic landscape images by using AI.\n\nExpanding the NVIDIA Studio ecosystem, NVIDIA Omniverse is now available at no cost to millions of individual creators with GeForce RTX and NVIDIA RTX GPUs.\n\nBolstered by new features and tools, NVIDIA\u2019s real-time design collaboration and simulation platform empowers artists, designers and creators to connect and collaborate in leading 3D design applications from their RTX-powered laptop, desktop or workstation.\n\nMulti-app workflows can grind to a halt with near-constant exporting and importing. With Omniverse, creators can connect their favorite 3D design tools to a single scene and simultaneously create and edit between the apps.\n\nWe\u2019ve also announced platform developments for Omniverse Machinima with new free game characters, objects and environments from Mechwarrior 5 , Shadow Warrior 3 , Squad and Mount & Blade II: Bannerlord ; and Omniverse Audio2Face with new blendshape support and direct export to Epic\u2019s MetaHuman ; plus early access to new platform features like Omniverse Nucleus Cloud \u2014 enabling simple \u201cone-click-to-collaborate\u201d sharing of large Omniverse 3D scenes.\n\nLearn more about Omniverse, the latest enhancements and its general availability, and download the latest version at nvidia.com/omniverse .", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMDEvMDQvc3R1ZGlvLWxhcHRvcHMtb21uaXZlcnNlLWNhbnZhcy8=.pdf"}, {"question": "What are the new features of the GeForce RTX 3080 Ti Laptop GPU?", "gt_answer": "The new GeForce RTX 3080 Ti Laptop GPU features 16GB of the fastest GDDR6 memory ever shipped in a laptop and higher performance than the desktop TITAN RTX.", "gt_context": "NVIDIA Studio laptops provide the best mobile performance for 3D creation. The new GeForce RTX 3080 Ti Laptop GPU features 16GB of the fastest GDDR6 memory ever shipped in a laptop and higher performance than the desktop TITAN RTX.\n\nThe new GeForce RTX 3070 Ti also delivers fantastic performance \u2014 it\u2019s up to 70 percent faster than RTX 2070 SUPER laptops.\n\nNext-generation laptop technologies are amping up performance. We\u2019ve worked with CPU vendors on CPU Optimizer. It\u2019s a new, low-level framework enabling the GPU to further optimize performance, temperature and power of next-gen CPUs. As a result, CPU efficiency is improved and power is transferred to the GPU for more performance in creative applications.\n\nIn compute-heavy apps like Adobe Premiere, Blender and Matlab, we\u2019ve developed Rapid Core Scaling. It enables the GPU to sense the real-time demands of applications and use only the cores it needs rather than all of them. This frees up power that can be used to run the active cores at higher frequencies, delivering up to 3x more performance for intensive creative work on the go.\n\nASUS, MSI and Razer are launching new laptops with a wide range of designs \u2014 and up to GeForce RTX 3080 Ti GPUs \u2014 starting in February.\n\nBolstered by work from the NVIDIA Research team developing GauGAN2 , NVIDIA Canvas is now available with 4 times higher resolution and five new materials.\n\nThe GauGAN2 AI model incorporated in the latest update helps deliver more realistic images with greater definition and fewer artifacts.\n\nFive new materials \u2014 straw, flowers, mud, dirt and bush \u2014 liven up and create richer landscape environments.\n\nRead more about the latest NVIDIA Canvas update .\n\nCreators can download the January Studio Driver , available now with improved performance and reliability for the Omniverse and Canvas updates.\n\nWith monthly updates, NVIDIA Studio Drivers deliver smooth performance on creative applications and the best possible experience when using NVIDIA GPUs. Extensive multi-app workflow testing ensures the latest apps run smoothly.\n\nFinally, the GeForce RTX 3050 GPU brings even more choice for creators. Our new entry-level GPU provides the most accessible way of getting great RTX benefits \u2014 real-time ray tracing, AI, a top-notch video encoder and video acceleration. Starting at just $279, it\u2019s a great way to start creating with RTX. Look for availability from partners worldwide on Jan. 27.\n\nOne more thing: Keep an eye out for more information on GeForce RTX 3090 Ti later this month. It\u2019ll have a huge 24GB of lightning-fast video memory, making it perfect for conquering nearly any creative task.\n\nSubscribe to the Studio YouTube channel for tutorials, tips and tricks by industry-leading artists, and stay up to date on all things Studio by signing up for the NVIDIA Studio newsletter .\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/01/04/studio-laptops-omniverse-canvas/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMDEvMDQvc3R1ZGlvLWxhcHRvcHMtb21uaXZlcnNlLWNhbnZhcy8=.pdf"}, {"question": "What companies are partnering with NVIDIA to further develop USD?", "gt_answer": "Pixar, Adobe, Autodesk, Siemens, and other leading companies are partnering with NVIDIA to further develop USD.", "gt_context": "NVIDIA and Partners Build Out Universal Scene Description to Accelerate Industrial Metaverse and Next Wave of AI\n\nEffort to Further USD as Foundation of Open Metaverse and 3D Internet Led by Pixar, Adobe, Autodesk, Siemens, Plus Innovators in Media, Gaming, Robotics, Industrial Automation and Retail; NVIDIA Announces Open-Source USD Resources and Test Suite\n\nNVIDIA today announced a broad initiative to evolve Universal Scene Description (USD), the open-source and extensible language of 3D worlds, to become a foundation of the open metaverse and 3D internet.\n\nWorking together with USD\u2019s inventor, Pixar, as well as Adobe, Autodesk, Siemens and a host of other leading companies, NVIDIA will pursue a multi-year roadmap to expand USD\u2019s capabilities beyond visual effects \u2014 enabling it to better support industrial metaverse applications in architecture, engineering, manufacturing, scientific computing, robotics and industrial digital twins.\n\nAt its SIGGRAPH special address, the company shared forthcoming updates to evolve USD. These include international character support, which will allow users from all countries and languages to participate in USD. Support for geospatial coordinates will enable city-scale and planetary-scale digital twins. And real-time streaming of IoT data will enable the development of digital twins that are synchronized to the physical world.\n\nTo accelerate USD development and adoption, the company also announced development of an open USD Compatibility Testing and Certification Suite that developers can freely use to test their USD builds and certify that they produce an expected result.\n\n\u201cBeyond media and entertainment, USD will give 3D artists, designers, developers and others the ability to work collaboratively across diverse workflows and applications as they build virtual worlds,\u201d said Rev Lebaredian, vice president of Omniverse and simulation technology at NVIDIA. \u201cWorking with our community of partners, we\u2019re investing in USD so that it can serve as the foundation for architecture, manufacturing, robotics, engineering and many more domains.\u201d\n\nOpen-Source USD Resources and Leaders Supporting USD NVIDIA is releasing a collection of free resources to speed USD adoption, including thousands of USD assets purpose-built to open up virtual-world building for users without 3D expertise. The company is also providing hundreds of on-demand tutorials, documentation and developer tools to help spread USD education.\n\n\u201cUSD is a cornerstone of Pixar\u2019s pipeline, and it\u2019s seeing rapidly growing momentum as an open-source framework across not only VFX and animation, but now industrial, design and scientific applications,\u201d said Steve May, chief technology officer at Pixar Animation Studios. \u201cNVIDIA\u2019s contributions to help evolve USD as the open foundation of fully interoperable 3D platforms will be a great benefit across industries.\u201d", "document": "TlZJRElBIGFuZCBQYXJ0bmVycyBVU0QgOC85LzIyLnBkZg==.pdf"}, {"question": "Which popular 3D software ecosystems will have USD plugins available in NVIDIA Omniverse?", "gt_answer": "The popular 3D software ecosystems that will have USD plugins available in NVIDIA Omniverse are PTC Creo, SideFX Houdini, Autodesk Alias, Autodesk Civil3D, and Siemens Xcelerator.", "gt_context": "NVIDIA also announced investment in building USD plugins from popular 3D software ecosystems to NVIDIA Omniverse\u2122, a platform for connecting and creating virtual worlds based on Universal Scene Description. New beta releases include PTC Creo and SideFX Houdini, with Autodesk Alias and Autodesk Civil3D, Siemens Xcelerator and more in development.\n\n\u201cSiemens and NVIDIA are coming together to enable the industrial metaverse where the future of design, engineering and collaboration will occur,\u201d said Dirk Didascalou, chief technology officer of Siemens Digital Industries. \u201cWe are excited to support USD in the Siemens Xcelerator platform and plan to collaborate with NVIDIA on the next generation of the format.\u201d\n\nAt SIGGRAPH, NVIDIA is also bringing together hundreds of engineering and product leads across the USD ecosystem into working councils to help align on USD development priorities and get feedback on where NVIDIA can centralize development efforts. Among the many companies contributing to and supporting USD are Adobe, Autodesk, Pixar and Siemens.\n\n\u201cAutodesk has been closely involved in the development of USD from its early inception as a means of standardizing the exchange of 3D data in animation and visual effects workflows,\u201d said Raji Arasu, executive vice president and chief technology officer at Autodesk. \u201cWe have long understood the importance of 3D interoperability and have already begun extending USD\u2019s applications beyond media and entertainment to design, engineering and industrial applications. We are excited by the momentum behind USD from partners like NVIDIA, which we believe will help better realize the concept of the metaverse and all the workflows it unlocks for our customers.\u201d\n\nInnovators in media, gaming, robotics, industrial automation, retail and grocery are already adopting USD as their metaverse\n\nlanguage of choice, including Kroger and Volvo Cars.\n\n\u201cThe promise of USD is immense. At Volvo, we immediately understood the value of the open, extensible, interoperable 3D scene description for our metaverse projects. Being able to maintain assets as a single source of truth and bring them from virtual world to virtual world will be seamless in 3D internet consumer applications,\u201d said Mattias Wikenmalm, senior expert of visualization at Volvo Cars.\n\nLearn more about NVIDIA\u2019s USD resources.\n\nAbout NVIDIA Since its founding in 1993, NVIDIA (NASDAQ: NVDA) has been a pioneer in accelerated computing. The company\u2019s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics and ignited the era of modern AI. NVIDIA is now a full-stack computing company with data-center-scale offerings that are reshaping industry. More information at https://nvidianews.nvidia.com/.", "document": "TlZJRElBIGFuZCBQYXJ0bmVycyBVU0QgOC85LzIyLnBkZg==.pdf"}, {"question": "What are the important factors that could cause actual results to differ materially?", "gt_answer": "Important factors that could cause actual results to differ materially include: global economic conditions; reliance on third parties for manufacturing; impact of technological development and competition; market acceptance of products; design, manufacturing, or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance when integrated into systems.", "gt_context": "Certain statements in this press release including, but not limited to, statements as to: the multi-year roadmap to expand USD\u2019s capabilities beyond visual effects; our collaborations with third parties; the impact of evolving USD; international character support allowing users from all countries and languages to participate in USD; support for geospatial coordinates enabling city-scale and planetary-scale digital twins; real-time streaming of IoT data enabling the development of digital twins that are synchronized to the physical world; the rapidly growing momentum of USD as an open-source framework across VFX, animation, industrial, design and scientific applications; the benefits, performance and impact of our products and technologies, including Omniverse; the future of design, engineering and collaboration occurring in the industrial metaverse; and the promise of USD are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners\u2019 products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company\u2019s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.\n\n\u00a9 2022 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo and NVIDIA Omniverse are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.\n\nKasia Johnston +1-415-813-8859 kasiaj@nvidia.com", "document": "TlZJRElBIGFuZCBQYXJ0bmVycyBVU0QgOC85LzIyLnBkZg==.pdf"}, {"question": "What is the new reward available for GeForce NOW Premium members?", "gt_answer": "The new reward available for GeForce NOW Premium members is Captain Marvel\u2019s Medieval Marvel suit.", "gt_context": "April Showers Bring 23 New GeForce NOW Games, Including \u2018Have a Nice Death\u2019\n\nNew \u2018Marvel\u2019s Midnight Suns\u2019 reward now available along with 11 new games this week, plus an update on the GeForce NOW partnership with Microsoft.\n\nAuthor: GeForce NOW Community\n\nIt\u2019s another rewarding GFN Thursday, with 23 new games for April on top of 11 joining the cloud this week and a new Marvel\u2019s Midnight Suns reward now available first for GeForce NOW Premium members.\n\nNewark, N.J., is next to complete its upgrade to RTX 4080 SuperPODs , making it the 12th region worldwide to bring new performance to Ultimate members.\n\nGeForce NOW on SHIELD TV is being updated for a more consistent experience across Android and TV devices. Update 6.00 has begun rolling out to SHIELD TV owners this week.\n\nPlus, work is underway to bring the initial batch of Xbox first-party games and features to GeForce NOW.\n\nLast month, we announced a partnership with Microsoft to bring Xbox Game Studios PC games to the GeForce NOW library, including titles from Bethesda, Mojang Studios and Activision, pending closure of Microsoft\u2019s acquisition. It\u2019s a shared commitment to giving gamers more choice and enabling PC gamers to play their favorite games anywhere.\n\nSince then the teams at both companies have been collaborating on delivering a best-in-class cloud gaming experience that PC gamers have come to expect, delivering a seamless experience across any device, whether playing locally or in the cloud.\n\nWe\u2019re making progress, and in future GFN Thursdays we will provide an update on onboarding of individual titles from Microsoft\u2019s incredibly rich catalog of first-party PC games. Stay tuned to GFN Thursday updates for the latest.\n\nStarting today, Premium GeForce NOW members can claim their marvel-ous new reward. Marvel\u2019s Midnight Suns, the tactical role-playing game from the creators of XCOM , has been praised for its immersive game play and cutting-edge visuals with support for DLSS 3 technology on top of RTX-powered ray tracing .\n\nWith the game\u2019s first downloadable content, called The Good, The Bad, and The Undead , fans were thrilled to welcome Deadpool to the roster . This week, members can get their free reward to secure Captain Marvel\u2019s Medieval Marvel suit.\n\nUltimate and Priority members can visit the GeForce NOW Rewards portal today and update the settings to start receiving special offers and in-game goodies. Better hurry, as this reward is available on a first-come, first-served basis only through Saturday, May 6.\n\nNo joke, kick the weekend off right by streaming Have a Nice Death . Restore order in this darkly charming 2D action game from Gearbox while playing as an overworked Death whose employees at Death Inc. have run rampant as caretakers of souls. Hack and slash through numerous minions and bosses in each department at the company, using unique weapons and spells.\n\nThis leads the 11 new games joining the cloud this week:\n\n9 Years of Shadows (New release on Steam )", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDMvMzAvZ2Vmb3JjZS1ub3ctdGh1cnNkYXktbWFyY2gtMzAv.pdf"}, {"question": "What games were released on Steam in March?", "gt_answer": "Some of the games released on Steam in March include 9 Years of Shadows, Gripper, Ravenbound, The Great War: Western Front, Troublemaker, Meet Your Maker, Road 96: Mile 0, TerraScape, Curse of the Sea Rats, and many more.", "gt_context": "9 Years of Shadows (New release on Steam )\n\nTerra Nil (New release on Steam , March 28)\n\nGripper (New release on Steam , March 29)\n\nSmalland: Survive the Wilds (New release on Steam , March 29)\n\nDREDGE (New release on Steam , March 30)\n\nRavenbound (New release on Steam , March 30)\n\nThe Great War: Western Front (New release on Steam , March 30)\n\nTroublemaker (New release on Steam , March 31)\n\nHave a Nice Death ( Steam )\n\nTower of Fantasy ( Steam )\n\nTunche (Free on Epic Games Store )\n\nPlus, look forward to the rest of April:\n\nMeet Your Maker (New release on Steam , April 4)\n\nRoad 96: Mile 0 (New release on Steam , April 4)\n\nTerraScape (New release on Steam , April 5)\n\nCurse of the Sea Rats (New release on Steam , April 6)\n\nRavenswatch (New release on Steam , April 6)\n\nSupplice (New release on Steam , April 6)\n\nDE-EXIT \u2013 Eternal Matters (New release on Steam , April 14)\n\nSurvival: Fountain of Youth (New release on Steam , April 19)\n\nTin Hearts (New release on Steam , April 20)\n\nDead Island 2 (New Release on Epic Games Store , April 21)\n\nAfterimage (New release on Steam , April 25)\n\nRoots of Pacha (New release on Steam , April 25)\n\nBramble: The Mountain King (New release on Steam , April 27)\n\n11-11 Memories Retold ( Steam )\n\ncanVERSE ( Steam )\n\nTeardown ( Steam )\n\nGet Even ( Steam )\n\nLittle Nightmares ( Steam )\n\nLittle Nightmares II ( Steam )\n\nThe Dark Pictures Anthology: Man of Medan ( Steam )\n\nThe Dark Pictures Anthology: Little Hope ( Steam )\n\nThe Dark Pictures Anthology: House of Ashes ( Steam )\n\nThe Dark Pictures Anthology: The Devil in Me ( Steam )\n\nOn top of the 19 games announced in March, nine extra ones joined the GeForce NOW library this month, including this week\u2019s additions 9 Years of Shadows , Terra Nil , Gripper, Troublemaker, Have a\n\nNice Death, Tunche, as well as:\n\nCall of the Sea ( Epic Games Store , March 9)\n\nGRID Legends ( Steam and EA )\n\nTchia (New release on Epic Games Store )\n\nSystem Shock didn\u2019t make it in March due to a shift in its release date, nor did Chess Ultra due to a technical issue.\n\nWith so many titles streaming from the cloud, what game will you play next? Let us know in the comments below, on Twitter or on Facebook . What's next on your list of games to try?nn \u2014 nn NVIDIA GeForce NOW (@NVIDIAGFN) March 29, 2023\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/03/30/geforce-now-thursday-march-30/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDMvMzAvZ2Vmb3JjZS1ub3ctdGh1cnNkYXktbWFyY2gtMzAv.pdf"}, {"question": "How can I participate in the Fortnite closed beta for mobile?", "gt_answer": "GeForce NOW members can sign up for a chance to join the Fortnite limited-time closed beta for mobile devices. Non-members can register for a GeForce NOW membership and sign up to become eligible for the closed beta once it starts rolling out next week.", "gt_context": "GFN Thursday: \u2018Fortnite\u2019 Comes to iOS Safari and Android Through NVIDIA GeForce NOW via Closed Beta \u2018Fortnite\u2019 starts streaming to mobile devices in a limited time closed beta via iOS Safari and the NVIDIA GeForce NOW Android app.\n\nAuthor: GeForce NOW Community\n\nStarting next week, Fortnite on GeForce NOW will launch in a limited-time closed beta for mobile, all streamed through the Safari web browser on iOS and the GeForce NOW Android app .\n\nThe beta is open for registration for all GeForce NOW members, and will help test our server capacity, graphics delivery and new touch controls performance. Members will be admitted to the beta in batches over the coming weeks.\n\nAlongside the amazing team at Epic Games, we\u2019ve been working to enable a touch-friendly version of Fortnite for mobile delivered through the cloud. While PC games in the GeForce NOW library are best experienced on mobile with a gamepad, the introduction of touch controls built by the GeForce NOW team offers more options for players, starting with Fortnite .\n\nBeginning today, GeForce NOW members can sign up for a chance to join the Fortnite limited-time closed beta for mobile devices. Not an existing member? No worries. Register for a GeForce NOW membership and sign up to become eligible for the closed beta once the experience starts rolling out next week. Upgrade to a Priority or RTX 3080 membership to receive priority access to gaming servers. A paid GeForce NOW membership is not required to participate.\n\nFor tips on gameplay mechanics or a refresher on playing Fortnite with touch controls, check out Fortnite\u2019s Getting Started page.\n\nAnd we\u2019re just getting started. Cloud-to-mobile gaming is a great opportunity for publishers to get their games into more gamers\u2019 hands with touch-friendly versions of their games. PC games or game engines, like Unreal Engine 4, which support Windows touch events can easily enable mobile touch support on GeForce NOW.\n\nWe\u2019re working with additional publishers to add more touch-enabled games to GeForce NOW. And look forward to more publishers streaming full PC versions of their games to mobile devices with built-in touch support \u2014 reaching millions through the Android app and iOS Safari devices.\n\nGFN Thursday always means more games. Members can find these and more streaming on the cloud this week:\n\nThe Anacrusis (New release on Steam and Epic Games Store , Jan. 13)\n\nSupraland Six Inches Under (New release on Steam , Jan. 14)\n\nGalactic Civilizations 3 (Free on Epic Games Store , Jan. 13 \u2013 20)\n\nReady or Not ( Steam )\n\nWe make every effort to launch games on GeForce NOW as close to their release as possible, but, in some instances, games may not be available immediately.\n\nWhat are you planning to play this weekend? Let us know on Twitter or in the comments below.\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/01/13/geforce-now-fortnite-closed-beta/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMDEvMTMvZ2Vmb3JjZS1ub3ctZm9ydG5pdGUtY2xvc2VkLWJldGEv.pdf"}, {"question": "What software did ManvsMachine use to create the romanesco broccoli animation?", "gt_answer": "ManvsMachine used SideFX's Houdini software.", "gt_context": "Flawless Fractal Food Featured This Week \u2018In the NVIDIA Studio\u2019\n\nManvsMachine recreates nature through coding and modeling with GeForce RTX 3090 GPUs.\n\nAuthor: Gerardo Delgado\n\nEditor\u2019s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows.\n\nManvsMachine steps In the NVIDIA Studio this week to share insights behind fractal art \u2014 which uses algorithms to artistically represent calculations \u2014 derived from geometric objects as digital images and animations.\n\nFounded in London in 2007, ManvsMachine is a multidimensional creative company specializing in design, film and visual arts.\n\nManvsMachine works closely with the world\u2019s leading brands and agencies, including Volvo, Adidas, Nike and more, to produce award-winning creative content.\n\nThe team at ManvsMachine finds inspiration from a host of places: nature and wildlife, conversations, films, documentaries, as well as new and historic artists of all mediums.\n\nFor fans of romanesco broccoli, the edible flower bud resembling cauliflower in texture and broccoli in taste might conjure mild, nutty, sweet notes that lend well to savory pairings. For ManvsMachine, it presented an artistic opportunity.\n\nThe Roving Romanesco animation started out as a series of explorations based on romanesco broccoli, a prime example of a fractal found in nature.\n\nManvsMachine\u2019s goal was to find an efficient way of recreating it in 3D and generate complex geometry using a simple setup.\n\nThe genesis of the animation revolved around creating a phyllotaxis pattern, an arrangement of leaves on a plant stem, using the high-performance expression language VEX in SideFX\u2019s Houdini software.\n\nThis was achieved by creating numerous points and offsetting each from the previous one by 137.5 degrees, known as the golden or \u201cperfect circular\u201d angle, while moving outward from the center. The built-in RTX-accelerated Karma XPU renderer enabled fast simulation models powered by the team\u2019s GeForce RTX 3090 GPUs.\n\nThe team added simple height and width to the shapes using ramp controls then copied geometry onto those points inside a loop.\n\nWith the basic structure intact, ManvsMachine sculpted florets individually to create a stunning 3D model in the shape of romanesco broccoli. The RTX-accelerated Karma XPU renderer dramatically sped up animations of the shape, as well.\n\n\u201cCreativity is enhanced by faster ray-traced rendering, smoother 3D viewports, quicker simulations and AI-enhanced image denoising upscaling \u2014 all accelerated by NVIDIA RTX GPUs.\u201d \u2014 ManvsMachine\n\nThe project was then imported to Foundry\u2019s Nuke software for compositing and final touch-ups. When pursuing a softer look, ManvsMachine counteracted the complexity of the animation with some \u201ceasy-on-the-eyes\u201d materials and color choices with a realistic depth of field.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDIvMjgvaW4tdGhlLW52aWRpYS1zdHVkaW8tZmVicnVhcnktMjgv.pdf"}, {"question": "What is the ethos of ManvsMachine?", "gt_answer": "The ethos of ManvsMachine is reflected in their name, with equal importance placed on ideas and execution.", "gt_context": "Many advanced nodes in Nuke are GPU accelerated, which gave the team another speed advantage.\n\nProjects like Roving Romanesco represent the high-quality work ManvsMachine strives to deliver for clients.\n\n\u201cOur ethos is reflected in our name,\u201d said ManvsMachine. \u201cEqual importance is placed on ideas and execution. Rather than sell an idea and then work out how to make it later, the preference is to present clients with the full picture, often leading with technique to inform the creative.\u201d\n\nCheck out @man.vs.machine on Instagram for more inspirational work.\n\nArtists looking to hone their Houdini skills can access Studio Shortcuts and Sessions on the NVIDIA Studio YouTube channel . Discover exclusive step-by-step tutorials from industry-leading artists, watch inspiring community showcases and more, powered by NVIDIA Studio hardware and software .\n\nFollow NVIDIA Studio on Instagram , Twitter and Facebook . Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter .\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/02/28/in-the-nvidia-studio-february-28/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDIvMjgvaW4tdGhlLW52aWRpYS1zdHVkaW8tZmVicnVhcnktMjgv.pdf"}, {"question": "What updates did Adobe introduce in its Creative Cloud and Substance 3D apps?", "gt_answer": "Adobe is all in on the AI revolution, adopting AI-powered features across its lineup of Adobe Creative Cloud and Substance 3D apps. The updates simplify repetitive tasks and make advanced effects accessible.", "gt_context": "Adobe MAX Kicks Off With Creative App Updates and 3D Artist Anna Natter Impresses This Week \u2018In the NVIDIA Studio\u2019\n\nThe reviews are in \u2014 the GeForce RTX 4090 GPU is a game changer for content creation, plus download the AI-powered October NVIDIA Studio Driver.\n\nAuthor: Gerardo Delgado\n\nEditor\u2019s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. In the coming weeks, we\u2019ll be deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.\n\nAdobe MAX is inspiring artists around the world to bring their ideas to life. The leading creative conference runs through Thursday, Oct. 20, in person and virtually.\n\nWith the recent release of the NVIDIA GeForce RTX 4090 GPU and its third-generation RT Cores, fourth-generation Tensor Cores and eighth-generation NVIDIA Dual AV1 Encoder , NVIDIA is ready to elevate creative workflows for Adobe artists.\n\nPlus, artist Anna Natter transforms 2D photos into full-fidelity 3D assets using the power of AI and state-of-the-art photogrammetry technology this week In the NVIDIA Studio .\n\nThe new Adobe features, the latest NVIDIA Studio laptops and more are backed by the October NVIDIA Studio Driver available for download today.\n\nPress and content creators have been putting the new GeForce RTX 4090 GPU through a wide variety of creative workflows \u2014 here\u2019s a sampling of their reviews:\n\n\u201cNVIDIA\u2019s new flagship graphics card brings massive gains in rendering and GPU compute-accelerated content creation.\u201d \u2014 Forbes\n\n\u201cGeForce RTX 4090 just puts on a clinic, by absolutely demolishing every other card here. In a lot of cases it\u2019s almost cutting rendering times in half.\u201d \u2014 Hardware Canucks\n\n\u201cIf you care about rendering performance to the point that you always lock your eyes on a top-end target, then the RTX 4090 is going to prove to be an absolute screamer..\u201d \u2014 Tech Gage\n\n\u201cThe NVIDIA GeForce RTX 4090 is more powerful than we even thought possible.\u201d \u2014 TechRadar\n\n\u201cAs for the 3D performance of Blender and V-Ray, it delivers a nearly 2x performance increase, which makes it undoubtedly the most powerful weapon for content creators.\u201d \u2014 XFastest\n\n\u201cNVIDIA has been providing Studio drivers for GeForce series graphics cards, they added dual hardware encoders and other powerful tools to help creators maximize their creativity. We can say it\u2019s a new-gen GPU king suitable for top-notch gamers and creators.\u201d \u2014 Techbang\n\nPick up the GeForce RTX 4090 GPU or a pre-built system today by heading to our Product Finder .\n\nAdobe is all in on the AI revolution, adopting AI-powered features across its lineup of Adobe Creative Cloud and Substance 3D apps. The updates simplify repetitive tasks and make advanced effects accessible.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTAvMTgvaW4tdGhlLW52aWRpYS1zdHVkaW8tb2N0b2Jlci0xOC8=.pdf"}, {"question": "What are some AI-powered features in Adobe Creative Cloud?", "gt_answer": "Some AI-powered features in Adobe Creative Cloud include RTX acceleration in Premiere Pro for AI effects, the Photo Restoration feature in Photoshop, the Object Selection Tool in Photoshop, and AI-powered masking tools in Lightroom Classic.", "gt_context": "Creators equipped with GeForce RTX GPUs, especially those part of the new RTX 40 Series, are primed to benefit from remarkable GPU acceleration of AI features in Adobe Creative Cloud.\n\nAdobe Premiere Pro is getting RTX acceleration for AI features, resulting in significant performance boosts on AI effects. For example, the Unsharp Mask filter will see an increase of 4.5x, and the Posterize Time effect of over 2x compared to running them on a CPU (performance measured on RTX 3090 Ti and Intel i9 12900K).\n\nThe new beta Photo Restoration feature uses AI-powered neural filters to process imagery, add tone and minimize the effects of film grain. Photo Restoration can be applied to a single image or batches of imagery to quickly and conveniently improve the picture quality of an artist\u2019s portfolio.\n\nPhotoshop\u2019s AI-powered Object Selection Tool allows artists to apply a selection to a particular object within an image. The user can manipulate the selected object, add filters and fine-tune details.\n\nThis saves the huge amount of time it takes artists to mask imagery \u2014 and in beta on the GeForce RTX 3060 Ti is 3x faster than the Intel UHD Graphics 700 and 4x faster than the Apple M1 Ultra.\n\nThe latest version of Adobe Photoshop Lightroom Classic makes it easy for users to create stunning final images with powerful new AI-powered masking tools.\n\nWith just a few clicks, these AI masks can identify and mask key elements within an image, including the main subject, sky and background, and can even select individuals within an image and apply masks to adjust specific areas, such as hair, face, eyes or lips.\n\nSubstance 3D Modeler is now available in general release. Modeler can help create concept art \u2014 it\u2019s perfect for sketching and prototyping, blocking out game levels, crafting detailed characters and props, or sculpting an entire scene in a single app. Its ability to switch between desktop and virtual reality is especially useful, depending on project needs and the artist\u2019s preferred style of working.\n\nSubstance 3D Sampler added its photogrammetry feature, currently in private beta, which automatically converts photos of real-world objects into textured 3D models without the need to fiddle with sliders or tweak values. With a few clicks, the artist can now create 3D assets. This feature serves as a bridge for 2D artists looking to make the leap to 3D.\n\nThese advancements join the existing lineup of GPU-accelerated and AI-enhanced Adobe apps, with features that continue to evolve and improve:\n\nAdobe Camera RAW \u2014 AI-powered Select Objects and Select People masking tools\n\nAfter Effects \u2014 Improved AI-powered Scene Edit Detection and H.264 rendering for faster exports with hardware-accelerated output\n\nIllustrator \u2014 Substance 3D materials plugin for faster access to assets and direct export of Universal Scene Description (USD) files\n\nLightroom Classic \u2014 AI-powered Select Background and Select Sky masking tools\n\nPhotoshop \u2014 Substance 3D materials plugin", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTAvMTgvaW4tdGhlLW52aWRpYS1zdHVkaW8tb2N0b2Jlci0xOC8=.pdf"}, {"question": "What software did Anna Natter use to create a virtual 3D copy of her dog?", "gt_answer": "Anna Natter used Substance 3D Sampler to create a virtual 3D copy of her dog.", "gt_context": "Photoshop \u2014 Substance 3D materials plugin\n\nPhotoshop Elements \u2014 AI-powered Moving Elements add motion to a still image\n\nPremiere Elements \u2014 AI-powered Artistic Effects transform clips with effects inspired by famous works of art or popular art styles\n\nPremiere Pro \u2014 Adds Auto Color to apply intelligent color corrections to video clips such as exposure, white balance and contrast that enhance footage, GPU-accelerated Lumetri scopes and faster Motion Graphics Templates\n\nSubstance 3D Painter \u2014 SBSAR Exports for faster exports and custom textures that are easy to plug and play, plus new options to apply blending modes and opacity\n\nTry these features on an NVIDIA Studio system equipped with a GeForce RTX GPU, and experience the ease and speed of RTX-accelerated creation.\n\nThis NVIDIA Studio Driver provides optimal support for the latest new creative applications including Topaz Sharpen AI and DXO Photo. In addition, this NVIDIA Studio Driver supports the new application updates announced at Adobe MAX including Premiere Pro, Photoshop, Photoshop Lightroom Classic and more.\n\nReceive Studio Driver notifications by downloading GeForce Experience or NVIDIA RTX Experience , and by subscribing to the NVIDIA Studio newsletter .\n\nDownload the Studio Driver today.\n\nAnna Natter, this week\u2019s featured In the NVIDIA Studio artist, is a 3D artist at heart that likes to experiment with different mediums. She has a fascination with AI \u2014 both the technology it\u2019s built on and its ever-expanding role in content creation.\n\n\u201cIt\u2019s an interesting debate where the \u2018art\u2019 starts when it comes to AI,\u201d said Natter. \u201cAfter almost a year of playing with AI, I\u2019ve been working on developing my own style and figuring out how I can make it mine.\u201d\n\nIn the image above, Natter applied Photoshop Neural Filters, which were accelerated by her GeForce RTX 3090 GPU. \u201cIt\u2019s always a good idea to use your own art for filters, so you can give everything a unique touch. So if you ask me if this is my art or not, it 100% is!\u201d said the artist.\n\nNatter has a strong passion for photogrammetry, she said, as virtually anything can be preserved in 3D. Photogrammetry features have the potential to save 3D artists countless hours. \u201cI create hyperrealistic 3D models of real-life objects which I could not have done by hand,\u201d she said. \u201cWell, maybe I could\u2019ve, but it would\u2019ve taken forever.\u201d\n\nThe artist even scanned her sweet pup Szikra to create a virtual 3D copy of her that will last forever.\n\nTo test the private beta photogrammetry feature in Substance 3D Sampler, Natter created this realistic tree model with a single series of images.\n\nNatter captured a video of a tree in a nearby park in her home country of Germany. The artist then uploaded the footage to Adobe After Effects, exporting the frames into an image sequence. After Effects contains over 30 features accelerated by RTX GPUs, which improved Natter\u2019s workflow.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTAvMTgvaW4tdGhlLW52aWRpYS1zdHVkaW8tb2N0b2Jlci0xOC8=.pdf"}, {"question": "What hardware did Natter use to accelerate her workflow in 3D Stager?", "gt_answer": "Natter used the GeForce RTX 3090 GPU to accelerate her workflow in 3D Stager.", "gt_context": "Once she was happy with the 3D image quality, Natter dropped the model from Substance 3D Sampler into Substance 3D Stager. The artist then applied true-to-life materials and textures to the scene and color matched the details to the scanned model with the Stager color picker.\n\nNatter then lit the scene with a natural outdoor High Dynamic Range Image (HDRI), one of the pre-built environment-lighting options in 3D Stager. \u201cWhat I really like about the Substance 3D suite is that it cuts the frustration out of my workflow, and I can just do my thing in a flow state, without interruption, because everything is compatible and works together so well,\u201d she said.\n\nThe GeForce RTX 3090 GPU accelerated her workflow within 3D Stager, with RTX-accelerated and AI-powered denoising in the viewport unlocking interactivity and smooth movement. When it came time to render, RTX-accelerated ray tracing quickly delivered photorealistic 3D renders, up to 7x faster than with CPU alone.\n\n\u201cI\u2019ve always had an NVIDIA GPU since I\u2019ve been working in video editing for the past decade and wanted hardware that works best with my apps. The GeForce RTX 3090 has made my life so much easier, and everything gets done so much faster.\u201d \u2014 3D artist Anna Natter\n\nNatter can\u2019t contain her excitement for the eventual general release of the Sampler photogrammetry feature. \u201cAs someone who has invested so much in 3D design, I literally can\u2019t wait to see what people are going to create with this,\u201d she said.\n\nCheck out Natter\u2019s Behance page.\n\nNVIDIA Studio wants to see your 2D to 3D progress!\n\nJoin the #From2Dto3D challenge this month for a chance to be featured on the NVIDIA Studio social media channels, like @JennaRambles, whose goldfish sketch was transformed into a beautiful 3D image. Does this tiny sketch count? n #From2Dto3D https://t.co/Jrjpezds6N pic.twitter.com/LkX5TtL6lz\n\n\u2014 Ebb N Flow (@JennaRambles) October 4, 2022\n\nEntering is easy. Simply post a 2D piece of art next to a 3D rendition of it on Instagram , Twitter or Facebook . And be sure to tag #From2Dto3D.\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/10/18/in-the-nvidia-studio-october-18/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTAvMTgvaW4tdGhlLW52aWRpYS1zdHVkaW8tb2N0b2Jlci0xOC8=.pdf"}, {"question": "What upgrade does HITMAN World of Assassination receive?", "gt_answer": "HITMAN World of Assassination receives DLSS 3 support.", "gt_context": "DLSS 3 Delivers Ultimate Boost in Latest Game Updates on GeForce NOW\n\n\u2018HITMAN World of Assassination,\u2019 \u2018Marvel\u2019s Midnight Suns\u2019 downloadable content adding latest Ultimate tech soon, plus six new games join the cloud.\n\nAuthor: GeForce NOW Community\n\nGeForce NOW RTX 4080 SuperPODs are rolling out now, bringing RTX 4080-class performance and features to Ultimate members \u2014 including support for NVIDIA Ada Lovelace GPU architecture technologies like NVIDIA DLSS 3 .\n\nThis GFN Thursday brings updates to some of GeForce NOW\u2019s hottest games that take advantage of these amazing technologies, all from the cloud.\n\nPlus, RTX 4080 SuperPOD upgrades are nearly finished in the London data center, expanding the number of regions where Ultimate members can experience the most powerful cloud gaming technology on the planet. Look for updates on Twitter once the upgrade is complete and be sure to check back each week to see which cities light up next on the map .\n\nMembers can also look for six more supported games in the GeForce NOW library this week.\n\nNVIDIA DLSS has revolutionized graphics rendering, using AI and GeForce RTX Tensor Cores to boost frame rates while delivering crisp, high-quality images that rival native resolution.\n\nPowered by new hardware capabilities of the Ada Lovelace architecture, DLSS 3 generates entirely new high-quality frames, rather than just pixels. It combines DLSS Super Resolution technology and DLSS Frame Generation to reconstruct seven-eighths of the displayed pixels, accelerating performance.\n\nDLSS 3 games are backwards compatible with DLSS 2 technology \u2014 when developers integrate DLSS 3, DLSS 2, aka DLSS Super Resolution, is supported by default. Additionally, integrations of DLSS 3 include NVIDIA Reflex , reducing system latency for all GeForce RTX users and making games more responsive.\n\nSupport for DLSS 3 is growing, and soon GeForce NOW Ultimate members can experience this technology in new updates to HITMAN World of Assassination and Marvel\u2019s Midnight Suns .\n\nThe critically acclaimed HITMAN 3 from IOI transforms into HITMAN World of Assassination , an upgrade that includes content from HITMAN 1, HITMAN 2 and HITMAN 3. With DLSS 3 support, streaming from the cloud in 4K looks better than ever, even with ray tracing and settings cranked to the max.\n\nBecome legendary assassin Agent 47 and use creativity and improvisation to execute ingenious, spectacular eliminations in sprawling sandbox locations all around the globe. Stick to the shadows to stalk and eliminate targets \u2014 or take them out in plain sight.\n\nAlong with DLSS 3 support, Ultimate members can enjoy ray-traced opaque reflections and shadows in the world of HITMAN as they explore open-world missions with multiple ways to succeed.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDEvMjYvZ2Vmb3JjZS1ub3ctdGh1cnNkYXktamFuLTI2Lw==.pdf"}, {"question": "What new content does Marvel's Midnight Suns' first DLC add?", "gt_answer": "Marvel's Midnight Suns' first DLC, The Good, The Bad, and the Undead, adds Deadpool to the team roster, along with new story missions, new enemies and more.", "gt_context": "Marvel\u2019s Midnight Suns \u2019 first downloadable content, The Good, The Bad, and the Undead, adds Deadpool to the team roster, along with new story missions, new enemies and more. Add in DLSS 3 support coming soon, and Ultimate members have a lot to look forward to.\n\nLaunched last month to critical acclaim, VGC awarded Marvel\u2019s Midnight Suns with a five-out-of-five rating, calling it a \u201cmodern strategy classic.\u201d PC Gamer said it was \u201ccompletely brilliant\u201d and scored it an\n\n88 out of 100, and Rock Paper Shotgun called it \u201cone of the best superhero games full stop.\u201d\n\nUltimate members can explore the abbey grounds and get to know the Merc with a Mouth at up to 4K resolutions and 120 frames per second, or immerse themselves in their mission with ultrawide resolutions at up to 3840 x 1600 at 120 frames per second \u2014 plus many other popular formats including 3440 x 1440 and 2560 x 1080.\n\nGeForce NOW members can also take their games and save data with them wherever they go, from underpowered PCs to Macs, Samsung and LG TVs, mobile devices and Chromebooks.\n\nGet ready to game: Six more games join the supported list in the GeForce NOW library this week:\n\nTom Clancy\u2019s Ghost Recon: Breakpoint (New release on Steam , Jan. 23)\n\nOddballers (New release on Ubisoft Connect , Jan. 26)\n\nWatch Dogs: Legion (New release on Steam , Jan. 26)\n\nCygnus Enterprises ( Steam )\n\nRain World ( Steam )\n\nThe Eternal Cylinder ( Steam )\n\nThere\u2019s only one question left to kick off a weekend full of gaming in the cloud. Let us know on Twitter or in the comments below. What's your favorite game to play in the cloud with #RTXON ? nn \u2014 nn NVIDIA GeForce NOW (@NVIDIAGFN) January 25, 2023\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/01/26/geforce-now-thursday-jan-26/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDEvMjYvZ2Vmb3JjZS1ub3ctdGh1cnNkYXktamFuLTI2Lw==.pdf"}, {"question": "What is Matice Biosciences using AI for?", "gt_answer": "Matice Biosciences is using AI to study the regeneration of tissues in animals known as super-regenerators, such as salamanders and planarians.", "gt_context": "Matice Founder and Harvard Professor Jessica Whited on Harnessing Regenerative Species \u2014 and AI \u2014 for Medical Breakthroughs\n\nAuthor: Brian Caulfield\n\nScientists at Matice Biosciences are using AI to study the regeneration of tissues in animals known as super-regenerators, such as salamanders and planarians.\n\nThe goal of the research is to develop new treatments that will help humans heal from injuries without scarring.\n\nOn the latest episode of NVIDIA\u2019s AI Podcast , host Noah Kravtiz spoke with Jessica Whited, a regenerative biologist at Harvard University and co-founder of Matice Biosciences.\n\nWhited was inspired to start the company after her son suffered a severe injury while riding his bike.\n\nShe realized that while her work had been dedicated ultimately to limb regeneration, the short-term byproduct of it was a wealth of information that could be used to harness this regenerative science into topical treatments that can be put in the hands of everyday people, like her son and many others, who would no longer have to live with the physical scars of their trauma.\n\nThis led her to investigate the connection between regeneration and scarring.\n\nWhited and her team are using AI to analyze the molecular and cellular mechanisms that control regeneration and scarring in super-regenerators.\n\nThey believe that by understanding these mechanisms, they can develop new treatments to help humans heal from injuries without scarring.\n\nLearn more about Matice at www.maticebio.com or on Instagram , Twitter , Facebook and LinkedIn .\n\nJules Anh Tuan Nguyen Explains How AI Lets Amputee Control Prosthetic Hand, Video Games\n\nA postdoctoral researcher at the University of Minnesota discusses his efforts to allow amputees to control their prosthetic limb \u2014 right down to the finger motions \u2014 with their minds.\n\nOverjet\u2019s Ai Wardah Inam on Bringing AI to Dentistry\n\nOverjet, a member of NVIDIA Inception , is moving fast to bring AI to dentists\u2019 offices. Dr. Wardah Inam, CEO of the company, discusses using AI to improve patient care.\n\nImmunai CTO and Co-Founder Luis Voloch on Using Deep Learning to Develop New Drugs\n\nLuis Voloch, co-founder and chief technology officer of Immunai, talks about tackling the challenges of the immune system with a machine learning and data science mindset.\n\nThe AI Podcast is now available through Amazon Music .\n\nIn addition, get the AI Podcast through iTunes , Google Podcasts , Google Play , Castbox , DoggCatcher, Overcast , PlayerFM , Pocket Casts, Podbay , PodBean , PodCruncher, PodKicker, Soundcloud , Spotify , Stitcher and TuneIn .\n\nMake the AI Podcast better. Have a few minutes to spare? Fill out this listener survey .\n\nFeatured Image Credit: Matice Biosciences\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/06/28/matice-podcast/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDYvMjgvbWF0aWNlLXBvZGNhc3Qv.pdf"}, {"question": "What is the first priority of the Alliance for OpenUSD?", "gt_answer": "The first priority of the alliance is to develop a specification that describes the core functionality of OpenUSD.", "gt_context": "NVIDIA Helps Forge Forum to Set OpenUSD Standard for 3D Worlds\n\nThe Alliance for OpenUSD will ensure compatibility in 3D tools and content for digitalization across industries.\n\nAuthor: Guy Martin\n\nNVIDIA joined Pixar, Adobe, Apple and Autodesk today to found the Alliance for OpenUSD , a major leap toward unlocking the next era of 3D graphics, design and simulation.\n\nThe group will standardize and extend OpenUSD, the open-source Universal Scene Description framework that\u2019s the foundation of interoperable 3D applications and projects ranging from visual effects to industrial digital twins.\n\nSeveral leading companies in the 3D ecosystem already signed on as the alliance\u2019s first general members \u2014 Cesium, Epic Games, Foundry, Hexagon, IKEA, SideFX and Unity.\n\nStandardizing OpenUSD will accelerate its adoption, creating a foundational technology that will help today\u2019s 2D internet evolve into a 3D web. Many companies are already working with NVIDIA to pioneer this future.\n\nOpenUSD is the foundation of NVIDIA Omniverse , a development platform for connecting and building 3D tools and applications. Omniverse is helping companies like Heavy.AI , Kroger and Siemens build and test physically accurate simulations of factories, retail locations, skyscrapers, sports cars and more.\n\nFor IKEA, OpenUSD represents \u201ca nonproprietary standard format to author and store 3D content to connect our value chain even closer, and develop home furnishing solutions to a lower price,\u201d Martin Enthed, an innovation manager at IKEA, said in a press release the alliance issued today.\n\n\u201cBy joining the alliance, we\u2019re demonstrating our dedication to the advantages that OpenUSD provides our clients when linking with cloud-based platforms, including Nexus , Hexagon\u2019s manufacturing platform, HxDR , Hexagon\u2019s digital reality platform, and NVIDIA Omniverse to build innovative solutions in their industries,\u201d said Burkhard Boeckem, CTO of Hexagon .\n\nPixar started work on USD in 2012 as a 3D foundation for its feature films, offering interoperability across data and workflows. The company made this powerful, multifaceted technology open source four years later, so anyone can use OpenUSD and contribute to its development.\n\nOpenUSD supports the requirements of building virtual worlds \u2014 like geometry, cameras, lights and materials. It also includes features necessary for scaling to large, complex datasets, and it\u2019s tremendously extensible, enabling the technology to be adapted to workflows beyond visual effects.\n\nOne unique capability of OpenUSD is its layering system, which lets users collaborate in real time without stepping on each other\u2019s toes. For example, one artist can model a scene while others create the lighting for it.\n\nAs its first priority, the alliance will develop a specification that describes the core functionality of OpenUSD. That\u2019ll provide a recipe tool builders can implement, encouraging adoption of the open standard across the widest possible array of use cases.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDgvMDEvb3BlbnVzZC1hbGxpYW5jZS0zZC1zdGFuZGFyZC8=.pdf"}, {"question": "What organization does the alliance operate under?", "gt_answer": "The alliance operates as part of the Joint Development Foundation (JDF), a branch of the Linux Foundation.", "gt_context": "The alliance will operate as part of the Joint Development Foundation (JDF), a branch of the Linux Foundation. The JDF provides a path to turn written specifications into industry standards suitable for adoption by globally respected groups like the International Organization for Standardization, or the ISO.\n\nNVIDIA has a deep commitment to OpenUSD and working with ecosystem partners to accelerate the framework\u2019s evolution and adoption across industries.\n\nAt last year\u2019s SIGGRAPH , NVIDIA detailed a multiyear roadmap of contributions it\u2019s making to enable OpenUSD use in architecture, engineering, manufacturing and more. An update on these plans will be presented by NVIDIA as part of the alliance at this year\u2019s conference on computer graphics .\n\nCollaboration is key to the alliance and evolution of OpenUSD.\n\nTo get involved or learn more, attend NVIDIA\u2019s keynote , OpenUSD day , hands-on labs and other showfloor activities at SIGGRAPH , running Aug. 6-10.\n\nThe Alliance for OpenUSD also will host a keynote panel session at the Academy Software Foundation\u2019s Open Source Days 2023.\n\nFor more information on the Alliance for OpenUSD (AOUSD), visit the webpage , and follow @AllianceOpenUSD on Twitter , Instagram , Facebook , and LinkedIn .\n\nFor a deeper dive on OpenUSD:\n\nCheck out our USD resources page .\n\nWatch a video series on getting started with OpenUSD .\n\nTake a course on using OpenUSD in 3D workflows .\n\nAnd watch a webinar about building applications with NVIDIA Omniverse .\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/08/01/openusd-alliance-3d-standard/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDgvMDEvb3BlbnVzZC1hbGxpYW5jZS0zZC1zdGFuZGFyZC8=.pdf"}, {"question": "What features does the new NVIDIA RTX Enterprise driver provide?", "gt_answer": "The new NVIDIA RTX Enterprise driver provides support for the RTX 6000 Ada Generation GPU, performance increases in applications, new capabilities enabled by the Ada Lovelace architecture, and enhancements to multi-display deployments and workspace customization.", "gt_context": "New NVIDIA RTX Enterprise Driver Enhances Graphics Workflows With Support for Latest RTX 6000 Ada Generation GPU Latest release delivers speedups in applications, new display features and access to new capabilities made possible by the NVIDIA Ada Lovelace architecture.\n\nAuthor: Daniel Lee\n\nWhether for creating digital content, designing products or analyzing data, professional workflows are becoming more complex, interactive and collaborative. Professionals are using powerful NVIDIA RTX GPUs to tackle these workflows \u2014 and with NVIDIA RTX Enterprise drivers, regular updates optimize and increase the performance of these GPUs.\n\nAvailable now, NVIDIA RTX Enterprise Release 525 (R525) provides support for the RTX 6000 Ada Generation GPU , NVIDIA\u2019s most powerful workstation GPU yet.\n\nWorkstation professionals can also take advantage of new features in NVIDIA MOSAIC technology to enhance multi-display deployments, and further customize their workspaces with new capabilities in NVIDIA RTX Desktop Manager .\n\nThe R525 driver provides support for NVIDIA RTX 6000, enabling professionals to accomplish the most challenging and demanding projects as rapidly as possible. Based on the NVIDIA Ada Lovelace GPU architecture, the card features third-generation RT Cores, fourth-generation Tensor Cores and next-gen CUDA cores with 48GB of graphics memory for powerful rendering, AI, graphics and compute performance.\n\nWith support for RTX 6000, the R525 driver provides performance increases over the previous R515 driver, including up to 9% gains in applications such as Adobe Media Encoder, Keyshot Viewer and SOLIDWORKS Visualize. 1\n\nR525 expands features of NVIDIA MOSAIC , the advanced multi-display technology for spanning desktops across screens. The driver enhances support for mixed displays that are running on custom compositors \u2014 such as specialized displays \u2014 as well as the standard Windows system compositor.\n\nWith this new capability, workstation professionals can use either a single GPU or multiple ones to scale out displays over a single desktop, which helps simplify application deployment and delivers a more aesthetic viewing experience.\n\nAdditionally, developers can take advantage of new Vulkan extensions, including access to new capabilities made possible by the Ada Lovelace architecture, such as deep learning frame generation with the Optical Flow Accelerator .\n\nThe NVIDIA RTX Desktop Manager , included with the R525 driver for Windows, allows users to manage single or multi-display workspaces with ease, providing maximum flexibility and control over display real estate and desktops.\n\nRTX Desktop Manager delivers new features in its latest update, including:\n\nA quick-access button that lets professionals easily access the features they often use, such as sending a window to different displays or resizing a window to a grid.\n\nA toggle feature that allows users to switch between viewing grid sizes.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTEvMzAvcnR4LWVudGVycHJpc2UtZHJpdmVyLXJlbGVhc2Uv.pdf"}, {"question": "What benchmark applications were used to obtain the results?", "gt_answer": "SOLIDWORKS Visualize, Keyshot, and Adobe AME.", "gt_context": "A toggle feature that allows users to switch between viewing grid sizes.\n\nLearn more about the newest release of the NVIDIA Enterprise Driver .\n\n1. Results were obtained from SOLIDWORKS Visualize, Keyshot, and Adobe AME benchmark applications performed on a test system comprised of a 12th Gen Intel Core i9-12900K processor with 32GB (2x16GB) RAM running Microsoft Windows 11 Enterprise. Testing was conducted with NVIDIA RTX A4000, A5500, and A6000 GPUs using the NVIDIA RTX Enterprise Driver, versions 526.67 and 516.59.\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/11/30/rtx-enterprise-driver-release/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTEvMzAvcnR4LWVudGVycHJpc2UtZHJpdmVyLXJlbGVhc2Uv.pdf"}, {"question": "What benefits can enterprises realize with NVIDIA DGX-Ready Managed Services?", "gt_answer": "Enterprises can realize benefits such as filling critical IT skills gaps and gaining direct access to AI expertise through NVIDIA DGX-Ready Managed Services.", "gt_context": "At NetApp INSIGHT 2022, Dive Into NVIDIA DGX BasePOD and NetApp ONTAP AI Explore the latest on infrastructure solutions, machine learning operations and managed services at NetApp\u2019s conference, running Nov. 1-3.\n\nAuthor: Tony Paikeday\n\nAs more businesses seek to operationalize AI use cases faster and more cost effectively, IT platforms are becoming central to that effort.\n\nAt this week\u2019s NetApp INSIGHT , a conference on data management and the hybrid multicloud, attendees can explore solutions that let enterprises move beyond prototypes to the deployment of proven models. These solutions can speed return on investment for AI and MLOps .\n\nNVIDIA, a sponsor of NetApp INSIGHT 2022, will share how enterprises can learn to spend more time focusing on their core missions rather than wrestling with infrastructure.\n\nEarlier this year, NVIDIA launched DGX BasePOD , an evolution of the DGX POD program and reference architecture. It delivers a new generation of infrastructure solutions for enterprises built on NVIDIA DGX systems with AMD EPYC CPUs, NVIDIA networking and its ecosystem of storage partners like NetApp.\n\nThe reference architecture gives businesses a valuable complement to the NVIDIA DGX SuperPOD data-center infrastructure platform for AI workflows. As its name suggests, DGX BasePOD is the base on which value-added solutions, including the NetApp ONTAP AI infrastructure stack, are built.\n\nNVIDIA and NetApp have a growing ecosystem of MLOps partners whose technologies layer on top of ONTAP AI to create complete solution stacks. Incorporating MLOps workflow management, these offerings serve as a platform on which IT teams can scale model pipelines and shorten the time to bring AI into production.\n\nOrganizations now have two choices for infrastructure design with NetApp and NVIDIA:\n\nThe turnkey DGX SuperPOD is a physical replica of NVIDIA infrastructure, backed by performance guarantees on specific workloads.\n\nNetApp ONTAP AI, based on the DGX BasePOD reference architecture, provides flexibility in key component choices and network architecture by enabling teams to alternatively deploy and work with NVIDIA DGX-certified solution providers to customize environments that suit their needs and scale to their objectives.\n\nNVIDIA DGX-Ready Managed Services can help users take the infrastructure pain out of clients\u2019 hands. With this approach, enterprises can realize benefits such as:\n\nFilling critical IT skills gaps that the business previously couldn\u2019t afford to invest in, as they distracted from the core mission.\n\nDirect access to AI expertise. Many businesses don\u2019t have AI experts who understand the latest innovations in model development and deployment at scale. These are offered from NVIDIA and NetApp teams through DGX-Ready Managed Services.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTEvMDEvbmV0YXBwLWluc2lnaHQtZGd4LWJhc2Vwb2Qv.pdf"}, {"question": "What can NetApp INSIGHT offer in terms of AI infrastructure solutions?", "gt_answer": "NetApp INSIGHT offers an experience akin to having an AI Center of Excellence or an AI private cloud. They provide an economical outsource model for scaling AI development without the need to wrestle with infrastructure.", "gt_context": "Gaining an experience akin to having an AI Center of Excellence or an AI private cloud \u2014 enabled with an economical outsource model for scaling AI development and without the need to wrestle with infrastructure.\n\nIn addition, check out the following at NetApp INSIGHT:\n\nSession: Making IT the Hero of AI in 2023: What Leaders Need to Know\n\nSession: NVIDIA DGX SuperPOD with NetApp\n\nHands-on-lab: Building an AI Data Pipeline with NetApp and NVIDIA\n\nJoin NVIDIA at NetApp INSIGHT to learn more about these infrastructure solutions and how to make scaling AI applications faster, easier and more cost effective.\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/11/01/netapp-insight-dgx-basepod/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTEvMDEvbmV0YXBwLWluc2lnaHQtZGd4LWJhc2Vwb2Qv.pdf"}, {"question": "What is the next-generation GH200 Grace Hopper superchip platform?", "gt_answer": "The next-generation GH200 Grace Hopper superchip platform is a dual configuration that comprises a single server with 144 Arm Neoverse cores, eight petaflops of AI performance, and 282GB of the latest HBM3e memory technology.", "gt_context": "SIGGRAPH Special Address: NVIDIA CEO Brings Generative AI to LA Show Speaking to thousands of developers and graphics pros, Jensen Huang announces updated GH200 Grace Hopper Superchip, NVIDIA AI Workbench, updates NVIDIA Omniverse with generative AI.\n\nAuthor: Brian Caulfield\n\nAs generative AI continues to sweep an increasingly digital, hyperconnected world, NVIDIA founder and CEO Jensen Huang made a thunderous return to SIGGRAPH, the world\u2019s premier computer graphics conference.\n\n\u201cThe generative AI era is upon us, the iPhone moment if you will,\u201d Huang told an audience of thousands Tuesday during an in-person special address in Los Angeles.\n\nNews highlights include the next-generation GH200 Grace Hopper Superchip platform , NVIDIA AI Workbench \u2014 a new unified toolkit that introduces simplified model tuning and deployment on NVIDIA AI platforms \u2014 and a major upgrade to NVIDIA Omniverse with generative AI and OpenUSD .\n\nThe announcements are about bringing all of the past decade\u2019s innovations \u2014 AI, virtual worlds, acceleration, simulation, collaboration and more \u2014 together.\n\n\u201cGraphics and artificial intelligence are inseparable, graphics needs AI, and AI needs graphics,\u201d Huang said, explaining that AI will learn skills in virtual worlds, and that AI will help create virtual worlds.\n\nFive years ago at SIGGRAPH, NVIDIA reinvented graphics by bringing AI and real-time ray tracing to GPUs. But \u201cwhile we were reinventing computer graphics with artificial intelligence, we were reinventing the GPU altogether for artificial intelligence,\u201d Huang said.\n\nThe result: increasingly powerful systems such as the NVIDIA HGX H100, which harnesses eight GPUs \u2014 and a total of 1 trillion transistors \u2014 that offer dramatic acceleration over CPU-based systems.\n\n\u201cThis is the reason why the world\u2019s data centers are rapidly transitioning to accelerated computing,\u201d Huang told the audience. \u201cThe more you buy, the more you save.\u201d\n\nTo continue AI\u2019s momentum, NVIDIA created the Grace Hopper Superchip, the NVIDIA GH200, which combines a 72-core Grace CPU with a Hopper GPU, and which went into full production in May.\n\nHuang announced that NVIDIA GH200, which is already in production, will be complemented with an additional version with cutting-edge HBM3e memory.\n\nHe followed up on that by announcing the next-generation GH200 Grace Hopper superchip platform with the ability to connect multiple GPUs for exceptional performance and easily scalable server design.\n\nBuilt to handle the world\u2019s most complex generative workloads, spanning large language models, recommender systems and vector databases, the new platform will be available in a wide range of configurations.\n\nThe dual configuration \u2014 which delivers up to 3.5x more memory capacity and 3x more bandwidth than the current generation offering \u2014 comprises a single server with 144 Arm Neoverse cores, eight petaflops of AI performance, and 282GB of the latest HBM3e memory technology.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDgvMDgvc2lnZ3JhcGgtMjAyMy1zcGVjaWFsLWFkZHJlc3Mv.pdf"}, {"question": "What is NVIDIA AI Workbench?", "gt_answer": "NVIDIA AI Workbench is a unified toolkit that provides developers with an easy-to-use platform to create, test, and fine-tune generative AI models on a PC or workstation. It allows developers to customize and run generative AI with just a few clicks, removing the complexity of getting started with an enterprise AI project.", "gt_context": "Leading system manufacturers are expected to deliver systems based on the platform in the second quarter of 2024.\n\nTo speed custom adoption of generative AI for the world\u2019s enterprises, Huang announced NVIDIA AI Workbench. It provides developers with a unified, easy-to-use toolkit to quickly create, test and fine-tune generative AI models on a PC or workstation \u2014 then scale them to virtually any data center, public cloud or NVIDIA DGX Cloud .\n\nAI Workbench removes the complexity of getting started with an enterprise AI project. Accessed through a simplified interface running on a local system, it allows developers to fine-tune models from popular repositories such as Hugging Face, GitHub and NGC using custom data. The models can then be shared easily across multiple platforms.\n\nWhile hundreds of thousands of pretrained models are now available, customizing them with the many open-source tools available can be challenging and time consuming.\n\n\u201cIn order to democratize this ability, we have to make it possible to run pretty much everywhere,\u201d Huang said.\n\nWith AI Workbench, developers can customize and run generative AI in just a few clicks. It allows them to pull together all necessary enterprise-grade models, frameworks, software development kits and libraries into a unified developer workspace.\n\n\u201cEverybody can do this,\u201d Huang said.\n\nLeading AI infrastructure providers \u2014 including Dell Technologies, Hewlett Packard Enterprise, HP Inc., Lambda, Lenovo and Supermicro \u2014 are embracing AI Workbench for its ability to bring enterprise generative AI capability to wherever developers want to work \u2014 including a local device.\n\nHuang also announced a partnership between NVIDIA and startup Hugging Face , which has 2 million users, that will put generative AI supercomputing at the fingertips of millions of developers building large language models and other advanced AI applications.\n\nDevelopers will be able to access NVIDIA DGX Cloud AI supercomputing within the Hugging Face platform to train and tune advanced AI models.\n\n\u201cThis is going to be a brand new service to connect the world\u2019s largest AI community to the world\u2019s best training and infrastructure,\u201d Huang said.\n\nIn a video, Huang showed how AI Workbench and ChatUSD bring it all together: allowing a user to start a project on a GeForce RTX 4090 laptop and scale, seamlessly to a workstation, or the data center as it grows more complex.\n\nUsing Jupyter Notebook, a user can prompt the model to generate a picture of Toy Jensen in space. When the model provides a result that doesn\u2019t work, because it\u2019s never seen Toy Jensen, the user can fine-tune the model with eight images of Toy Jensen and then prompt it again to get a correct result.\n\nThen with AI Workbench, the new model can be deployed to an enterprise application.\n\nIn a further step to accelerate the adoption of generative AI, NVIDIA announced the latest version of its enterprise software suite, NVIDIA AI Enterprise 4.0 .", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDgvMDgvc2lnZ3JhcGgtMjAyMy1zcGVjaWFsLWFkZHJlc3Mv.pdf"}, {"question": "What are some of the new applications and services offered by NVIDIA AI Enterprise?", "gt_answer": "NVIDIA AI Enterprise offers new foundation applications and services for developers and industrial enterprises to optimize and enhance their 3D pipelines with the OpenUSD framework and generative AI.", "gt_context": "NVIDIA AI Enterprise gives businesses access to the tools needed to adopt generative AI, while also offering the security and API stability required for large-scale enterprise deployments.\n\nOffering new foundation applications and services for developers and industrial enterprises to optimize and enhance their 3D pipelines with the OpenUSD framework and generative AI , Huang announced a major release of NVIDIA Omniverse, an OpenUSD-native development platform for building, simulating, and collaborating across tools and virtual worlds.\n\nHe also announced NVIDIA\u2019s contributions to OpenUSD, the framework and universal interchange for describing, simulating and collaborating across 3D tools. Updates to the Omniverse platform include advancements to Omniverse Kit \u2014 the engine for developing native OpenUSD applications and extensions \u2014 as well as to the NVIDIA Omniverse Audio2Face foundation app and spatial-computing\n\ncapabilities .\n\nCesium, Convai, Move AI, SideFX Houdini and Wonder Dynamics are now connected to Omniverse via OpenUSD.\n\nAnd expanding their collaboration across Adobe Substance 3D, generative AI and OpenUSD initiatives, Adobe and NVIDIA announced plans to make Adobe Firefly \u2014 Adobe\u2019s family of creative generative AI models \u2014 available as APIs in Omniverse.\n\nOmniverse users can now build content, experiences and applications that are compatible with other OpenUSD-based spatial computing platforms such as ARKit and RealityKit. Huang announced a broad range of frameworks, resources and services for developers and companies to accelerate the adoption of Universal Scene Description, known as OpenUSD , including contributions such as geospatial data models, metrics assembly and simulation-ready, or SimReady , specifications for OpenUSD. Huang also announced four new Omniverse Cloud APIs built by NVIDIA for developers to more seamlessly implement and deploy OpenUSD pipelines and applications.\n\nChatUSD \u2014 Assisting developers and artists working with OpenUSD data and scenes, ChatUSD is a large language model (LLM) agent for generating Python-USD code scripts from text and answering USD knowledge questions.\n\nRunUSD \u2014 a cloud API that translates OpenUSD files into fully path-traced rendered images by checking compatibility of the uploaded files against versions of OpenUSD releases, and generating renders with Omniverse Cloud.\n\nDeepSearch \u2014 an LLM agent enabling fast semantic search through massive databases of untagged assets.\n\nUSD-GDN Publisher \u2014 a one-click service that enables enterprises and software makers to publish high-fidelity, OpenUSD-based experiences to the Omniverse Cloud Graphics Delivery Network (GDN) from an Omniverse-based application such as USD Composer , as well as stream in real time to web browsers and mobile devices.\n\nThese contributions are an evolution of last week\u2019s announcement of NVIDIA\u2019s co-founding of the Alliance for OpenUSD along with Pixar, Adobe, Apple and Autodesk.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDgvMDgvc2lnZ3JhcGgtMjAyMy1zcGVjaWFsLWFkZHJlc3Mv.pdf"}, {"question": "What systems are being announced by NVIDIA and global workstation manufacturers?", "gt_answer": "NVIDIA and global workstation manufacturers are announcing powerful new RTX workstations for development and content creation.", "gt_context": "Providing more computing power for all of this, Huang said NVIDIA and global workstation manufacturers are announcing powerful new RTX workstations for development and content creation in the age of generative AI and digitization.\n\nThe systems, including those from BOXX, Dell Technologies, HP and Lenovo, are based on NVIDIA RTX 6000 Ada Generation GPUs and incorporate NVIDIA AI Enterprise and NVIDIA Omniverse Enterprise software.\n\nSeparately, NVIDIA released three new desktop workstation Ada Generation GPUs \u2014 the NVIDIA RTX 5000 , RTX 4500 and RTX 4000 \u2014 to d eliver the latest AI, graphics and real-time rendering technology to professionals worldwide.\n\nHuang also detailed how, together with global data center system manufacturers, NVIDIA is continuing to supercharge generative AI and industrial digitization with new NVIDIA OVX featuring the new NVIDIA L40S GPU, a powerful, universal data center processor design.\n\nThe powerful new systems will accelerate the most compute-intensive, complex applications, including AI training and inference, 3D design and visualization, video processing and industrial digitalization with the NVIDIA Omniverse platform.\n\nMore innovations are coming, thanks to NVIDIA Research.\n\nAt the show\u2019s Real Time Live Event, NVIDIA researchers will demonstrate a generative AI workflow that helps artists rapidly create and iterate on materials for 3D scenes, using text or image prompts to\n\ngenerate custom textured materials faster and with finer creative control.\n\nAnd NVIDIA Research also demo\u2019d how AI can take video conferencing to the next level with new 3D features. NVIDIA Research recently published a paper demonstrating how AI could power a 3D video-conferencing system with minimal capture equipment.\n\nThe production version of Maxine, now available in NVIDIA Enterprise, allows professionals, teams, creators and others to tap into the power of AI to create high-quaity audio and video effects, even using standard microphone and webcams. Watch Huang\u2019s full special address at NVIDIA\u2019s SIGGRAPH event site . where there are also details of labs, presentations and more happening throughout the show.\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/08/08/siggraph-2023-special-address/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDgvMDgvc2lnZ3JhcGgtMjAyMy1zcGVjaWFsLWFkZHJlc3Mv.pdf"}, {"question": "What is the focus of the healthcare program funded by the Wallenberg Foundation?", "gt_answer": "The healthcare program, funded by the Wallenberg Foundation, will employ AI to understand protein folding, which is fundamentally important to understanding diseases like cancer.", "gt_context": "Supersizing AI: Sweden Turbocharges Its Innovation Engine\n\nThe Berzelius supercomputer at Link\u00f6ping University is gearing up for a major upgrade to drive advances in machine learning, healthcare and more.\n\nAuthor: Magnus Weberg\n\nSweden is outfitting its AI supercomputer for a journey to the cutting edge of machine learning, robotics and healthcare.\n\nIt couldn\u2019t ask for a better guide than Anders Ynnerman (above). His signature blue suit, black spectacles and gentle voice act as calm camouflage for a pioneering spirit.\n\nEarly on, he showed a deep interest in space, but his career took a different direction. He established the country\u2019s first network of supercomputing centers and went on to pioneer scientific visualization technologies used in hospitals and museums around the world.\n\nToday, he leads Sweden\u2019s largest research effort, WASP \u2014 the Wallenberg Artificial Intelligence, Autonomous Systems and Software Program \u2014 focused on AI innovation.\n\n\u201cThis is a year when people are turning their focus to sustainability challenges we face as a planet,\u201d said the Link\u00f6ping University professor. \u201cWithout advances in AI and other innovations, we won\u2019t have a sustainable future.\u201d\n\nTo supercharge environmental efforts and more, Sweden will upgrade its Berzelius supercomputer . Based on the NVIDIA DGX SuperPOD , it will deliver nearly half an exaflop of AI performance, placing it among the world\u2019s 100 fastest AI supercomputers.\n\n\u201cA machine like Berzelius is fundamental not only for the results it delivers, but the way it catalyzes expertise in Sweden,\u201d he said. \u201cWe\u2019re a knowledge-driven nation, so our researchers and companies need access to the latest technology to compete.\u201d\n\nIn June, the system trained GPT-SW3 , a family of large language models capable of drafting a speech or answering questions in Swedish.\n\nToday, a more powerful version sports 20 billion parameters, a popular measure of a neural network\u2019s smarts. It can help developers write software and handle other complex tasks.\n\nLong term, researchers aim to train a version with a whopping 175 billion parameters that\u2019s also fluent in Nordic languages like Danish and Norwegian.\n\nOne of Sweden\u2019s largest banks is already exploring use of the latest GPT-SW3 variant for a chatbot and other applications.\n\nTo build big AIs, Berzelius will add 34 NVIDIA DGX A100 systems to its cluster of 60 that makeup the SuperPOD. The new units will sport GPUs with 80GB of memory each.\n\n\u201cHaving really fat nodes with large memory is important for some of these models,\u201d Ynnerman said. Atos, the system integrator, is providing \u201ca very smooth ride getting the whole process set up,\u201d he added.\n\nIn healthcare, a data-driven life sciences program , funded by the Wallenberg Foundation, will be a major Berzelius user. The program spans 10 universities and will, among other applications, employ AI to understand protein folding, fundamentally important to understanding diseases like cancer.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDEvMjQvYWktc3dlZGVuLWJlcnplbGl1cy8=.pdf"}, {"question": "What project is exploring how autonomous systems act in space and undersea?", "gt_answer": "One project is exploring how autonomous systems act in space and undersea.", "gt_context": "Others will use Berzelius to improve detection of cancer cells and navigate the massive mounds of data in human genomes.\n\nSome researchers are exploring tools such as NVIDIA Omniverse Avatar Cloud Engine and NVIDIA BotMaker to create animated patients. Powered by GPT-SW3, they could help doctors practice telemedicine skills.\n\nSweden\u2019s work in image and video recognition will get a boost from Berzelius. Such algorithms advance work on the autonomous systems used in modern factories and warehouses.\n\nOne project is exploring how autonomous systems act in space and undersea. It\u2019s a topic close to the heart of a recent addition to WASP, researcher Christer Fuglesang, who was named Sweden\u2019s first astronaut in 1992.\n\nFuglesang went to the International Space Station in 2006 and 2008. Later, as a professor of physics at Sweden\u2019s Royal Institute of Technology, he collaborated with Ynnerman on live shows about life in space, presented in the WISDOME dome theater at the Visualization Center C Ynnerman founded and directs.\n\nThanks to his expertise in visualization, \u201cI can go to Mars whenever I want,\u201d Ynnerman quipped.\n\nHe took NVIDIA founder and CEO Jensen Huang and Marcus Wallenberg \u2014 scion of Sweden\u2019s leading industrial family \u2014 on a tour of outer space at the dome to mark the Berzelius upgrade. The dome can show the Martian surface in 8K resolution at 120 frames per second, thanks to its use of 12 NVIDIA Quadro RTX 8000 GPUs .\n\nYnnerman\u2019s algorithms have touched millions who\u2019ve seen visualizations of Egyptian mummies at the British Museum.\n\n\u201cThat makes me even more proud than some of my research papers because many are young people we can inspire with a love for science and technology,\u201d he said.\n\nA passion for science and technology has attracted more than 400 active Ph.D. candidates so far to WASP, which is on the way to exceeding its goal of 600 grads by 2031.\n\nBut even a visualization specialist can\u2019t be everywhere. So Ynnerman\u2019s pet project will use AI to create a vibrant, virtual museum guide.\n\n\u201cI think we can provide more people a \u2018wow\u2019 experience \u2014 I want a copilot when I\u2019m navigating the universe,\u201d he said.\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/01/24/ai-sweden-berzelius/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDEvMjQvYWktc3dlZGVuLWJlcnplbGl1cy8=.pdf"}, {"question": "What is the reward for celebrating the third anniversary of GeForce NOW?", "gt_answer": "The reward for celebrating the third anniversary of GeForce NOW is a free Dying Light 2 outfit called 'Post-Apo', along with additional loot for Ultimate and Priority members.", "gt_context": "Gather Your Party: GFN Thursday Brings \u2018Baldur\u2019s Gate 3\u2019 to the Cloud\n\nCelebrate #3YearsOfGFN with a \u2018Dying Light 2\u2019 reward and three new games.\n\nAuthor: GeForce NOW Community\n\nVenture to the Forgotten Realms this GFN Thursday in Baldur\u2019s Gate 3, streaming on GeForce NOW .\n\nCelebrations for the cloud gaming service\u2019s third anniversary continue with a Dying Light 2 reward that\u2019s to die for. It\u2019s the cherry on top of three new titles joining the GeForce NOW library this week.\n\nMysterious abilities are awakening inside you. Embrace corruption or fight against darkness itself in Baldur\u2019s Gate 3 ( Steam ) \u2013 a next-generation role-playing game, set in the world of Dungeons and Dragons .\n\nChoose from a wide selection of D&D; races and classes, or play as an origin character with a handcrafted background on underpowered PCs and Macs. Adventure, loot, battle and romance as you journey through the Forgotten Realms and beyond from mobile devices . Play alone and select companions carefully, or as a party of up to four in multiplayer.\n\nLevel up to the GeForce NOW Ultimate membership to experience the power of an RTX 4080 in the cloud and all of its benefits, including up to 4K 120 frames per second gameplay on PC and Mac, and ultrawide resolution support for a truly immersive experience.\n\nTo celebrate the third anniversary of GeForce NOW, members can now check their accounts to make sure they received the gift of free Dying Light 2 rewards .\n\nClaim a new in-game outfit dubbed \u201cPost-Apo,\u201d complete with a Rough Duster, Bleak Pants, Well-Worn Boots, Tattered Leather Gauntlets, Dystopian Mask and Spiked Bracers to scavenge around and parkour in. Members who upgrade to Ultimate and Priority memberships can claim extra loot with this haul, including the Patchy Paraglider and Scrap Slicer weapon.\n\nVisit the GeForce NOW Rewards portal to start receiving special offers and in-game goodies.\n\nBuckle up for three more games supported in the GeForce NOW library this week.\n\nRecipe for Disaster (Free on Epic Games , Feb. 9-16)\n\nBaldur\u2019s Gate 3 ( Steam )\n\nInside the Backrooms ( Steam)\n\nMembers continue to celebrate #3YearsOfGFN on our social channels, sharing their favorite cloud gaming devices: We asked which device you played GeForce NOW on the most and you did not disappoint. nnn From 3D printed @Razer Kishi setups to @ASUS laptops, here are our nnn n favorite submissions to celebrate #3YearsOfGFN ! n pic.twitter.com/MGpLJ1E81N \u2014 nn NVIDIA GeForce NOW (@NVIDIAGFN) February 1, 2023\n\nFollow #3YearsOfGFN on Twitter and Facebook all month long and check out this week\u2019s question. What's the most beautiful place you've visited in-game on GFN? n\n\nReply with a screenshot or game capture for a chance to win a NVIDIA G-SYNC @MSIGaming MEG381CQR Ultrawide Gaming Monitor & celebrate #3YearsOfGFN ! nn (Perfect to use with an Ultimate membership n) pic.twitter.com/xN5zs4XtFp \u2014 nn NVIDIA GeForce NOW (@NVIDIAGFN) February 8, 2023", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDIvMDkvZ2Vmb3JjZS1ub3ctdGh1cnNkYXktZmViLTkv.pdf"}, {"question": "When was the article published?", "gt_answer": "The article was published on Thursday, Feb 9, 2023.", "gt_context": "Original URL: https://blogs.nvidia.com/blog/2023/02/09/geforce-now-thursday-feb-9/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDIvMDkvZ2Vmb3JjZS1ub3ctdGh1cnNkYXktZmViLTkv.pdf"}, {"question": "Who will deliver the opening keynote at GTC 2023?", "gt_answer": "Jensen Huang", "gt_context": "NVIDIA GTC 2023 to Feature Latest Advances in AI Computing Systems, Generative AI, Industrial Metaverse, Robotics; Keynote by Jensen Huang; Talks by OpenAI, DeepMind Founders\n\nVirtual Conference to Offer 650+ Sessions From Leaders in Technology, Business, Academia and Government, March 20-23\n\nNVIDIA today announced that company founder and CEO Jensen Huang will deliver the opening keynote at GTC 2023, covering the latest advancements in generative AI, the metaverse, large language models, robotics, cloud computing and more.\n\nMore than 250,000 people are expected to register for the four-day event, which will include 650+ sessions from researchers, developers and industry leaders in virtually every computing domain. GTC will also feature a fireside chat with Huang and OpenAI co-founder Ilya Sutskever, plus talks by DeepMind\u2019s Demis Hassabis, Stability AI\u2019s Emad Mostaque and many others.\n\nRegistration is free and open now at www.nvidia.com/gtc.\n\n\u201cThis is the most extraordinary moment we have witnessed in the history of AI,\u201d Huang said. \u201cNew AI technologies and rapidly spreading adoption are transforming science and industry, and opening new frontiers for thousands of new companies. This will be our most important GTC yet.\u201d\n\nHuang\u2019s keynote will be livestreamed on Tuesday, March 21, at 8 a.m. Pacific time and available on demand afterward. Registration is not required to view the keynote. Closed captioning in English will be available for the keynote and sessions.\n\nOther notable speakers include:\n\nChike Aguh, chief innovation officer, U.S. Department of Labor Soumith Chintala, researcher, Meta and creator of PyTorch Paul Debevec, chief research officer, Netflix Eyeline Studios Kathryn Guarini, CIO, IBM Corporation Tony Hemmelgarn, CEO, Siemens Digital Industries Software Sergey Levine, associate professor, UC Berkeley Thomas Schulthess, director, Swiss National Supercomputing Centre, ETH Zurich Kathy Smith, artist and professor, USC Ashok Srivastava, chief data officer, Intuit\n\nAmong other organizations participating are: Amazon Robotics, AWS, ByteDance, Dell Technologies, Deloitte, Epic Games, Ford Motor Company, Fraunhofer, General Motors, Google, HPE, Jaguar Land Rover, Lenovo, Lockheed Martin, Microsoft, MIT, Oracle Cloud, Pixar, Samsung, Shell, TSMC, United States Space Force and VMware.\n\nSpotlight on Research GTC will also include panels from the industry\u2019s top researchers, a talk by NVIDIA Chief Scientist Bill Dally, and 65+ sessions focused on generative AI. Huang\u2019s fireside chat with Sutskever, chief scientist and co-founder of OpenAI, will air on March 22, at 9 a.m. Pacific time, and on demand afterward.\n\nNotable sessions include:", "document": "R1RDIDIwMjMgQW5ub3VuY2VtZW50IDIvMjEvMjMucGRm.pdf"}, {"question": "What sessions are available for startups at GTC?", "gt_answer": "Sessions for startups at GTC include Essential Tech for GenAI Startups, Emerging Venture Themes for 2023 - Generative AI, and Riding the Wave - Generative AI for Startups.", "gt_context": "Notable sessions include:\n\nA fireside chat with Scott Belsky, chief product officer at Adobe, and Bryan Catanzaro, vice president of applied research at NVIDIA, on how generative AI is transforming the creative process. A conversation with NVIDIA\u2019s automotive team on how generative AI is revolutionizing AV development. Numerous talks on demystifying generative AI for a broad audience. A discussion on AI\u2019s influence on art with AI artist Refik Anadol, The Museum of Modern Art curators Paola Antonelli and Michelle Kuo, and NVIDIA Vice President of Omniverse Richard Kerris. A panel from robotics experts on how AI can advance real-world deployments of robots using simulation. Multiple sessions on how generative AI can be used across industries from content creation to graphics to drug discovery by Amgen, Autodesk, AWS, Evozyne, General Motors, Icahn School of Medicine at Mount Sinai, London College of Fashion, Microsoft Research and SK Telecom.\n\nLearning and Career Development Opportunities GTC provides participants at all career stages with learning opportunities. Registrants can sign up for full-day, instructor-led, hands-on technical workshops offered by the NVIDIA Deep Learning Institute (DLI) at discounted pricing. Twenty-eight workshops will be offered in multiple languages, including Korean, Japanese and Chinese.\n\nAs part of NVIDIA\u2019s efforts to increase AI workforce readiness and create a more inclusive AI ecosystem, GTC will offer training and sessions including Change the World With a Career in AI, Fundamentals of Deep Learning and Blueprint to Becoming an Effective Student Researcher for early career and student participants. Additionally, NVIDIA is providing credits for DLI workshops at GTC to minority-serving institutions like HBCUs, HSIs and community colleges.\n\nSessions for Startups GTC offers startups the opportunity to learn directly from experts in AI, data science and machine learning. NVIDIA Inception, a global program designed to nurture cutting-edge startups with 13,000+ members, will host tracks aimed at helping startups grow their businesses and gain industry knowledge. The NVIDIA Venture Capital Alliance program, which has 400 VC firms as members, will host sessions designed for investors.\n\nSessions for startups include Essential Tech for GenAI Startups, Emerging Venture Themes for 2023 - Generative AI and Riding the Wave - Generative AI for Startups.\n\nNVIDIA Financial Analyst Q&A NVIDIA will hold a Q&A session with financial analysts following the keynote at 10 a.m. Pacific time. The webcast will be available at investor.nvidia.com.", "document": "R1RDIDIwMjMgQW5ub3VuY2VtZW50IDIvMjEvMjMucGRm.pdf"}, {"question": "What is NVIDIA's main focus as a company?", "gt_answer": "NVIDIA is a pioneer in accelerated computing and has a focus on data-center-scale offerings that are reshaping industry.", "gt_context": "About NVIDIA Since its founding in 1993, NVIDIA (NASDAQ: NVDA) has been a pioneer in accelerated computing. The company\u2019s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, ignited the era of modern AI and is fueling the creation of the metaverse. NVIDIA is now a full-stack computing company with data-center-scale offerings that are reshaping industry. More information at https://nvidianews.nvidia.com/.\n\nCertain statements in this press release including, but not limited to, statements as to: the timing, size, themes, sessions, speakers, participants, availability and impact of GTC, including the GTC keynote; new AI technologies and rapidly spreading adoption transforming science and industry, and opening new frontiers for thousands of new companies; this GTC being our most important yet; the learning and development opportunities at GTC; and the timing and availability of the financial analyst Q&A following the GTC keynote are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners' products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company's website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.\n\n\u00a9 2023 NVIDIA Corporation. All rights reserved. NVIDIA and the NVIDIA logo are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.\n\nStephanie Matthew Corporate Communications NVIDIA +1-408-646-3359 smatthew@nvidia.com", "document": "R1RDIDIwMjMgQW5ub3VuY2VtZW50IDIvMjEvMjMucGRm.pdf"}, {"question": "What GPUs set world records in AI inference benchmarks?", "gt_answer": "The NVIDIA H100 Tensor Core GPUs set world records in AI inference benchmarks.", "gt_context": "NVIDIA Hopper Sweeps AI Inference Benchmarks in MLPerf Debut In industry-standard tests of AI inference, NVIDIA H100 GPUs set world records, A100 GPUs showed leadership in mainstream performance and Jetson AGX Orin led in edge computing.\n\nAuthor: Dave Salvator\n\nIn their debut on the MLPerf industry-standard AI benchmarks, NVIDIA H100 Tensor Core GPUs set world records in inference on all workloads, delivering up to 4.5x more performance than previous-generation GPUs.\n\nThe results demonstrate that Hopper is the premium choice for users who demand utmost performance on advanced AI models.\n\nAdditionally, NVIDIA A100 Tensor Core GPUs and the NVIDIA Jetson AGX Orin module for AI-powered robotics continued to deliver overall leadership inference performance across all MLPerf tests: image and speech recognition, natural language processing and recommender systems.\n\nThe H100, aka Hopper, raised the bar in per-accelerator performance across all six neural networks in the round. It demonstrated leadership in both throughput and speed in separate server and offline scenarios.\n\nThe NVIDIA Hopper architecture delivered up to 4.5x more performance than NVIDIA Ampere architecture GPUs, which continue to provide overall leadership in MLPerf results.\n\nThanks in part to its Transformer Engine , Hopper excelled on the popular BERT model for natural language processing. It\u2019s among the largest and most performance-hungry of the MLPerf AI models.\n\nThese inference benchmarks mark the first public demonstration of H100 GPUs, which will be available later this year. The H100 GPUs will participate in future MLPerf rounds for training.\n\nNVIDIA A100 GPUs, available today from major cloud service providers and systems manufacturers, continued to show overall leadership in mainstream performance on AI inference in the latest tests.\n\nA100 GPUs won more tests than any submission in data center and edge computing categories and scenarios. In June , the A100 also delivered overall leadership in MLPerf training benchmarks, demonstrating its abilities across the AI workflow.\n\nSince their July 2020 debut on MLPerf, A100 GPUs have advanced their performance by 6x , thanks to continuous improvements in NVIDIA AI software.\n\nNVIDIA AI is the only platform to run all MLPerf inference workloads and scenarios in data center and edge computing.\n\nThe ability of NVIDIA GPUs to deliver leadership performance on all major AI models makes users the real winners. Their real-world applications typically employ many neural networks of different kinds.\n\nFor example, an AI application may need to understand a user\u2019s spoken request, classify an image, make a recommendation and then deliver a response as a spoken message in a human-sounding voice. Each step requires a different type of AI model.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMDkvMDgvaG9wcGVyLW1scGVyZi1pbmZlcmVuY2Uv.pdf"}, {"question": "Which AI workloads and scenarios are covered by MLPerf benchmarks?", "gt_answer": "The MLPerf benchmarks cover computer vision, natural language processing, recommendation systems, speech recognition, and more.", "gt_context": "The MLPerf benchmarks cover these and other popular AI workloads and scenarios \u2014 computer vision, natural language processing, recommendation systems, speech recognition and more. The tests ensure users will get performance that\u2019s dependable and flexible to deploy.\n\nUsers rely on MLPerf results to make informed buying decisions, because the tests are transparent and objective. The benchmarks enjoy backing from a broad group that includes Amazon, Arm, Baidu, Google, Harvard, Intel, Meta, Microsoft, Stanford and the University of Toronto.\n\nIn edge computing, NVIDIA Orin ran every MLPerf benchmark, winning more tests than any other low-power system-on-a-chip. And it showed up to a 50% gain in energy efficiency compared to its debut on MLPerf in April.\n\nIn the previous round, Orin ran up to 5x faster than the prior-generation Jetson AGX Xavier module, while delivering an average of 2x better energy efficiency.\n\nOrin integrates into a single chip an NVIDIA Ampere architecture GPU and a cluster of powerful Arm CPU cores. It\u2019s available today in the NVIDIA Jetson AGX Orin developer kit and production modules for robotics and autonomous systems, and supports the full NVIDIA AI software stack, including platforms for autonomous vehicles ( NVIDIA Hyperion ), medical devices ( Clara Holoscan ) and robotics ( Isaac ).\n\nThe MLPerf results show NVIDIA AI is backed by the industry\u2019s broadest ecosystem in machine learning.\n\nMore than 70 submissions in this round ran on the NVIDIA platform. For example, Microsoft Azure submitted results running NVIDIA AI on its cloud services.\n\nIn addition, 19 NVIDIA-Certified Systems appeared in this round from 10 systems makers, including ASUS, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Lenovo and Supermicro.\n\nTheir work shows users can get great performance with NVIDIA AI both in the cloud and in servers running in their own data centers.\n\nNVIDIA partners participate in MLPerf because they know it\u2019s a valuable tool for customers evaluating AI platforms and vendors. Results in the latest round demonstrate that the performance they deliver to users today will grow with the NVIDIA platform.\n\nAll the software used for these tests is available from the MLPerf repository, so anyone can get these world-class results. Optimizations are continuously folded into containers available on NGC , NVIDIA\u2019s catalog for GPU-accelerated software. That\u2019s where you\u2019ll also find NVIDIA TensorRT , used by every submission in this round to optimize AI inference.\n\nRead our Technical Blog for a deeper dive into the technology fueling NVIDIA\u2019s MLPerf performance .\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/09/08/hopper-mlperf-inference/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMDkvMDgvaG9wcGVyLW1scGVyZi1pbmZlcmVuY2Uv.pdf"}, {"question": "What software does Solomon Jagwe use for his work?", "gt_answer": "Solomon Jagwe uses Reallusion iClone and Omniverse for his work.", "gt_context": "Into the Omniverse: Reallusion Elevates Character Animation Workflows With Two-Way Live Sync and OpenUSD Support\n\nUpdates to Reallusion iClone boost productivity for 3D creators, offering real-time previews and a bidirectional workflow with Omniverse.\n\nAuthor: Dane Johnston\n\nEditor\u2019s note: This post is part of Into the Omniverse , a series focused on how artists, developers and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse .\n\nWhether animating a single 3D character or generating a group of them for industrial digitalization, creators and developers who use the popular Reallusion software can boost their workflows with the latest update to the iClone Omniverse Connector released this month.\n\nThe upgrade enables seamless collaboration and expands creative possibilities for creators using NVIDIA Omniverse , a development platform for connecting and building OpenUSD-based tools and applications.\n\nNew features include real-time synchronization of projects, as well as enhanced import functionality for the Universal Scene Description framework, known as OpenUSD, which makes work between iClone and Omniverse quicker, smoother and more efficient. The update also comes with bug fixes and improvements.\n\nCreators across the world are using Reallusion iClone, a real-time 3D animation software, to bring their characters to life.\n\nSolomon Jagwe \u2014 a 3D artist, animator and award-winning film director \u2014 uses Reallusion iClone and Omniverse for his work, which often focuses on environmental themes.\n\nJagwe, who grew up in East Africa, recalls fond childhood memories drawing the creatures he\u2019d see when he ventured into the countryside with his brother. Even now, much of his 3D work begins with a simple sketch using pen and paper.\n\nThe artist said he always strives to create art that makes a difference.\n\nFor example, Jagwe created Adventures of Nkoza and Nankya , a video series for educating people of all ages on Ugandan culture. He modeled the sets for the series in Autodesk 3ds Max and Autodesk Maya, animated in Reallusion iClone and composed in Omniverse.\n\n\u201cWith the iClone Connector for Omniverse, I can easily render my iClone animations in Omniverse and take advantage of the iClone animation tools in combination with the Omniverse Audio2Face generative AI capabilities,\u201d Jagwe said.\n\nJagwe\u2019s entire creative pipeline is accelerated by USD, which acts as a common language between 3D applications and enables sharing full scenes across content-creation tools.\n\n\u201cOpenUSD makes it so much easier to transport all the textures and characters together in one place for animation,\u201d Jagwe said. The artist added that he hopes his work inspires other indie filmmakers to bring their story ideas to life using iClone and Omniverse.\n\nA scene from Jagwe\u2019s educational series, \u201cAdventures of Nkoza and Nankya.\u201d", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDgvMTYvb3BlbnVzZC1zdXBwb3J0LWZvci1lbGV2YXRlZC1hbmltYXRpb24td29ya2Zsb3dzLw==.pdf"}, {"question": "What is the purpose of the iClone Connector for Omniverse?", "gt_answer": "The purpose of the iClone Connector for Omniverse is to provide powerful integrations between the two platforms, allowing users to synchronize their projects in real time and enhance their 3D character animation pipelines.", "gt_context": "A scene from Jagwe\u2019s educational series, \u201cAdventures of Nkoza and Nankya.\u201d\n\nAnother indie filmmaker, Benjamin Sokomba Dazhi , aka Benny Dee, has also mastered the art of animation. He\u2019s landed roles as head animator for the film The Legend of Oronpoto as well as creator and director of the Cartoon Network Africa Dance Challenge .\n\nDazhi uses Omniverse with Reallusion\u2019s iClone and Character Creator to supercharge his artistic workflow.\n\n\u201cThe main challenges I faced when trying to meet deadlines were long render times and difficulties with software compatibility, but using an Omniverse Connector for Reallusion\u2019s iClone app has been game-changing for my workflow,\u201d he said.\n\nA scene from one of Dhazi\u2019s animated music videos.\n\nSeveral other Omniverse community members recently joined a livestream to share their workflows using Reallusion and Omniverse. Watch the stream on demand:\n\nThe updated Reallusion iClone Connector for Omniverse offers powerful integrations between the two platforms.\n\nUsers can now seamlessly synchronize their projects in real time thanks to new bidirectional live-sync capabilities. This means changes made in either iClone or Omniverse can be automatically reflected back to the other. Such bidirectional synchronization can be applied to animation-related changes for characters, such as skeletal and morph animation.\n\nThe iClone Connector also enables enhanced USD import capabilities. Users can now import static meshes, cameras and lights from Omniverse directly into iClone. This improved functionality includes a filter that allows importing assets with optimal efficiency based on their types.\n\nSee how designers can now preview Omniverse renders in real time while animating in iClone, as they enjoy seamless two-way USD data transfer:\n\nNext week, an Omniverse community livestream will feature Reallusion Vice President John Martin, who\u2019ll share all the ways the iClone Omniverse Connector can advance 3D character animation pipelines.\n\nWatch NVIDIA founder and CEO Jensen Huang\u2019s keynote address at SIGGRAPH on demand to learn about the latest breakthroughs in graphics, research, OpenUSD and AI.\n\nLike Reallusion, learn how anyone can build their own Omniverse extension or Connector to enhance their 3D workflows and tools.\n\nShare your Reallusion and Omniverse work as part of the latest community challenge, #StartToFinish. Use the hashtag to submit a screenshot of a project featuring both its beginning and ending stages for a chance to be featured on the @NVIDIAStudio and @NVIDIAOmniverse social channels.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDgvMTYvb3BlbnVzZC1zdXBwb3J0LWZvci1lbGV2YXRlZC1hbmltYXRpb24td29ya2Zsb3dzLw==.pdf"}, {"question": "Where can I download the NVIDIA Omniverse standard license?", "gt_answer": "You can download the NVIDIA Omniverse standard license for free.", "gt_context": "Get started with NVIDIA Omniverse by downloading the standard license free , or learn how Omniverse Enterprise can connect your team . Developers can check out these Omniverse resources to begin building on the platform. Stay up to date on the platform by subscribing to the newsletter and following NVIDIA Omniverse on Instagram , LinkedIn , Medium , Threads and Twitter . For more, check out our forums , Discord server , Twitch and YouTube channels.\n\nFeatured image courtesy of Reallusion.\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/08/16/openusd-support-for-elevated-animation-workflows/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDgvMTYvb3BlbnVzZC1zdXBwb3J0LWZvci1lbGV2YXRlZC1hbmltYXRpb24td29ya2Zsb3dzLw==.pdf"}, {"question": "What is the purpose of the partnership between Siemens and NVIDIA?", "gt_answer": "The purpose of the partnership is to enable the industrial metaverse and increase the use of AI-driven digital twin technology in the manufacturing industry.", "gt_context": "Siemens and NVIDIA to Enable Industrial Metaverse\n\nPartnership to transform the manufacturing industry with immersive experiences across the lifecycle from design through operation Companies will connect NVIDIA Omniverse and Siemens Xcelerator platforms to enable full-fidelity digital twins and connect software-defined AI systems from edge to cloud\n\nSiemens, a leader in industrial automation and software, infrastructure, building technology and transportation, and NVIDIA, a pioneer in accelerated graphics and artificial intelligence (AI), today announced an expansion of their partnership to enable the industrial metaverse and increase use of AI-driven digital twin technology that will help bring industrial automation to a new level. As a first step in this collaboration, the companies plan to connect Siemens Xcelerator, the open digital business platform, and NVIDIA Omniverse\u2122, a platform for 3D design and collaboration. This will enable an industrial metaverse with physics-based digital models from Siemens and real-time AI from NVIDIA in which companies make decisions faster and with increased confidence.\n\nThe addition of Omniverse to the open Siemens Xcelerator partner ecosystem will accelerate the use of digital twins that can deliver productivity and process improvements across the production and product lifecycles. Companies of all sizes will be able to employ digital twins with real-time performance data; create innovative industrial IoT solutions; leverage actionable insights from analytics at the edge or in the cloud; and tackle the engineering challenges of tomorrow by making visually rich, immersive simulations more accessible.\n\n\u201cPhotorealistic, physics-based digital twins embedded in the industrial metaverse offer enormous potential to transform our economies and industries by providing a virtual world where people can interact and collaborate to solve real-world problems. Through this partnership, we will make the industrial metaverse a reality for companies of all sizes,\u201d said Roland Busch, President and Chief Executive Officer, Siemens AG. \u201cFor over a decade, our digital twin technology has been helping customers across all industries to boost their productivity and today offer the industry\u2019s most comprehensive digital twin. When Siemens Xcelerator is connected to Omniverse, we will enable a real-time, immersive metaverse that connects hardware and software, from the edge to the cloud with rich data from Siemens\u2019 software and solutions.\u201d", "document": "U2llbWVucyBOVklESUEgNi8yOS8yMi5wZGY=.pdf"}, {"question": "What is Siemens Xcelerator?", "gt_answer": "Siemens Xcelerator is a platform that connects mechanical, electrical, and software domains across product and production processes, enabling the convergence of IT and OT.", "gt_context": "\u201cSiemens and NVIDIA share a common vision that the industrial metaverse will drive digital transformation. This is just the first step in our joint effort to make this vision real for our customers and all parts of the global manufacturing industry,\u201d said Jensen Huang, founder and CEO, NVIDIA. \u201cThe connection to Siemens Xcelerator will open NVIDIA\u2019s Omniverse and AI ecosystem to a whole new world of industrial automation that is built using Siemens\u2019 mechanical, electrical, software, IoT and edge solutions.\u201d\n\nThis partnership brings together complementary technologies and ecosystems to realize the industrial metaverse. Siemens is uniquely positioned at the intersections of the real and digital world, information technology and operational technology. The Siemens Xcelerator platform connects mechanical, electrical and software domains across the product and production processes and enables the convergence of IT and OT.\n\nNVIDIA Omniverse is an AI-enabled, physically simulated and industrial-scale virtual-world engine that enables for the first time full-fidelity live digital twins. NVIDIA AI, used by more than 25,000 companies worldwide, is the world\u2019s most popular AI platform and the intelligence engine of Omniverse in the cloud and autonomous systems at the edge. NVIDIA Omniverse and AI are ideal computation engines to represent the comprehensive digital twin from Siemens Xcelerator.\n\nAbout NVIDIA Since its founding in 1993, NVIDIA (NASDAQ: NVDA) has been a pioneer in accelerated computing. The company\u2019s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics and ignited the era of modern AI. NVIDIA is now a full-stack computing company with data-center-scale offerings that are reshaping industry. More information at https://nvidianews.nvidia.com/.\n\nSiemens AG (Berlin and Munich) is a technology company focused on industry, infrastructure, transport, and healthcare. From more resource-efficient factories, resilient supply chains, and smarter buildings and grids, to cleaner and more comfortable transportation as well as advanced healthcare, the company creates technology with purpose adding real value for customers. By combining the real and the digital worlds, Siemens empowers its customers to transform their industries and markets, helping them to transform the everyday for billions of people. Siemens also owns a majority stake in the publicly listed company Siemens Healthineers, a globally leading medical technology provider shaping the future of healthcare. In addition, Siemens holds a minority stake in Siemens Energy, a global leader in the transmission and generation of electrical power.\n\nIn fiscal 2021, which ended on September 30, 2021, the Siemens Group generated revenue of \u20ac62.3 billion and net income of \u20ac6.7 billion. As of September 30, 2021, the company had around 303,000 employees worldwide. Further information is available on the Internet at www.siemens.com.", "document": "U2llbWVucyBOVklESUEgNi8yOS8yMi5wZGY=.pdf"}, {"question": "What are some important factors that could cause actual results to differ materially?", "gt_answer": "Important factors that could cause actual results to differ materially include: global economic conditions; NVIDIA\u2019s reliance on third parties for manufacturing; impact of technological development and competition; changes in consumer preferences or demands, among others.", "gt_context": "Certain statements in this press release including, but not limited to, statements as to: the benefits, performance, impact, and abilities of NVIDIA\u2019s products and technologies, including NVIDIA Omniverse and NVIDIA AI; the benefits and impact of the partnership between Siemens and NVIDIA; and the industrial metaverse driving digital transformation are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; NVIDIA\u2019s reliance on third parties to manufacture, assemble, package and test NVIDIA\u2019s products; the impact of technological development and competition; development of new products and technologies or enhancements to NVIDIA\u2019s existing products and technologies; market acceptance of NVIDIA\u2019s products or NVIDIA\u2019s partners\u2019 products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of NVIDIA\u2019s products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on NVIDIA\u2019s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.\n\nNote: A list of relevant Siemens trademarks can be found here. NVIDIA, the NVIDIA logo and NVIDIA Omniverse are trademarks and/or registered trademarks of NVIDIA Corporation and/or Mellanox Technologies in the U.S. and other countries. Other trademarks belong to their respective owners.\n\nNoah Cole Siemens noah.cole@siemens.com Kasia Johnston +1-415-813-8859 kasiaj@nvidia.com Lexi Hatziharalambous lexih@nvidia.com", "document": "U2llbWVucyBOVklESUEgNi8yOS8yMi5wZGY=.pdf"}, {"question": "What is the purpose of the new content engine developed by NVIDIA and WPP?", "gt_answer": "The purpose of the new content engine is to enable creative teams to produce high-quality commercial content faster, more efficiently, and at scale while staying aligned with a client's brand.", "gt_context": "WPP Partners With NVIDIA to Build Generative AI- Enabled Content Engine for Digital Advertising\n\nGroundbreaking Engine Built on NVIDIA AI and Omniverse Connects Creative 3D and AI Tools From Leading Software Makers to Revolutionize Brand Content, Experiences at Scale\n\nCOMPUTEX\u2014NVIDIA and WPP today announced they are developing a content engine that harnesses NVIDIA Omniverse\u2122 and AI to enable creative teams to produce high-quality commercial content faster, more efficiently and at scale while staying fully aligned with a client\u2019s brand.\n\nThe new engine connects an ecosystem of 3D design, manufacturing and creative supply chain tools, including those from Adobe and Getty Images, letting WPP\u2019s artists and designers integrate 3D content creation with generative AI. This enables their clients to reach consumers in highly personalized and engaging ways, while preserving the quality, accuracy and fidelity of their company\u2019s brand identity, products and logos.\n\nNVIDIA founder and CEO Jensen Huang unveiled the engine in a demo during his COMPUTEX keynote address, illustrating how clients can work with teams at WPP, the world\u2019s largest marketing services organization, to make large volumes of brand advertising content such as images or videos and experiences like 3D product configurators more tailored and immersive.\n\n\u201cThe world\u2019s industries, including the $700 billion digital advertising industry, are racing to realize the benefits of AI,\u201d Huang said. \u201cWith Omniverse Cloud and generative AI tools, WPP is giving brands the ability to build and deploy product experiences and compelling content at a level of realism and scale never possible before.\u201d\n\n\u201cGenerative AI is changing the world of marketing at incredible speed,\u201d said Mark Read, CEO of WPP. \u201cOur partnership with NVIDIA gives WPP a unique competitive advantage through an AI solution that is available to clients nowhere else in the market today. This new technology will transform the way that brands create content for commercial use, and cements WPP\u2019s position as the industry leader in the creative application of AI for the world\u2019s top brands.\u201d\n\nAn Engine for Creativity The new content engine has at its foundation Omniverse Cloud \u2014 a platform for connecting 3D tools, and developing and operating industrial digitalization applications. This allows WPP to seamlessly connect its supply chain of product-design data from software such as Adobe\u2019s Substance 3D tools for 3D and immersive content creation, plus computer-aided design tools to create brand-accurate, photoreal digital twins of client products.", "document": "V1BQIEVuZ2luZSA1LzI4LzIzLnBkZg==.pdf"}, {"question": "What tools and content does WPP use with generative AI?", "gt_answer": "WPP uses responsibly trained generative AI tools and content from partners such as Adobe and Getty Images.", "gt_context": "WPP uses responsibly trained generative AI tools and content from partners such as Adobe and Getty Images so its designers can create varied, high-fidelity images from text prompts and bring them into scenes. This includes Adobe Firefly, a family of creative generative AI models, and exclusive visual content from Getty Images created using NVIDIA Picasso, a foundry for custom generative AI models for visual design.\n\nWith the final scenes, creative teams can render large volumes of brand-accurate, 2D images and videos for classic advertising, or publish interactive 3D product configurators to NVIDIA Graphics Delivery Network, a worldwide, graphics streaming network, for consumers to experience on any web device.\n\nIn addition to speed and efficiency, the new engine outperforms current methods, which require creatives to manually create hundreds of thousands of pieces of content using disparate data coming from disconnected tools and systems.\n\nThe partnership with NVIDIA builds on WPP\u2019s existing leadership position in emerging technologies and generative AI, with award-winning campaigns for major clients around the world.\n\nThe new content engine will soon be available exclusively to WPP\u2019s clients around the world.\n\nAbout WPP WPP is the creative transformation company. We use the power of creativity to build better futures for our people, planet, clients and communities. For more information, visit www.wpp.com.\n\nAbout NVIDIA Since its founding in 1993, NVIDIA (NASDAQ: NVDA) has been a pioneer in accelerated computing. The company\u2019s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, ignited the era of modern AI and is fueling the creation of the industrial metaverse. NVIDIA is now a full-stack computing company with data- center-scale offerings that are reshaping industry. More information at https://nvidianews.nvidia.com/.\n\nCertain statements in this press release including, but not limited to, statements as to: the benefits, impact, performance,", "document": "V1BQIEVuZ2luZSA1LzI4LzIzLnBkZg==.pdf"}, {"question": "What are some of the collaborations NVIDIA has?", "gt_answer": "NVIDIA has collaborations with WPP, Adobe, and Getty Images.", "gt_context": "features and availability of our products, services and technologies, including NVIDIA Omniverse Cloud, NVIDIA AI, NVIDIA Omniverse, Picasso and NVIDIA Graphics Delivery Network; our collaborations with WPP, Adobe and Getty Images, and the benefits, impact, features and availability thereof; and the world\u2019s industries, including the digital advertising industry, racing to realize the benefits of AI are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners' products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company's website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.\n\n\u00a9 2023 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo and NVIDIA Omniverse are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability, and specifications are subject to change without notice.\n\nKasia Johnston +1-415-813-8859 kasiaj@nvidia.com Louise Lacourarie WPP +44 (0)20 7282 4600 +44 7741 360931 louise.lacourarie@wpp.com Niken Wresniwiro WPP +44 (0)20 7282 4600 +44 (0)7876 005 489 niken.wresniwiro@wpp.com", "document": "V1BQIEVuZ2luZSA1LzI4LzIzLnBkZg==.pdf"}, {"question": "What is Tarteel's mission?", "gt_answer": "Tarteel's mission is to strengthen the relationship Muslims have with the Quran.", "gt_context": "How Tarteel Uses AI to Help Arabic Learners Perfect Their Pronunciation\n\nAuthor: Brian Caulfield\n\nThere are some 1.8 billion Muslims, but only 16% or so of them speak Arabic, the language of the Quran.\n\nThis is in part due to the fact that many Muslims struggle to find qualified instructors to give them feedback on their Quran recitation.\n\nEnter today\u2019s guest and his company Tarteel, a member of the NVIDIA Inception program for startups.\n\nTarteel was founded with the mission of strengthening the relationship Muslims have with the Quran.\n\nThe company is accomplishing this with a fusion of Islamic principles and cutting-edge technology.\n\nAI Podcast host Noah Kravitz spoke with Tarteel CEO Anas Abou Allaban, to learn more.\n\nArtem Cherkasov and Olexandr Isayev on Democratizing Drug Discovery With NVIDIA GPUs\n\nIt may seem intuitive that AI and deep learning can speed up workflows \u2014 including novel drug discovery, a typically yearslong and several-billion-dollar endeavor. However, there is a dearth of recent research reviewing how accelerated computing can impact the process. Professors Artem Cherkasov and Olexandr Isayev discuss how GPUs can help democratize drug discovery.\n\nLending a Helping Hand: Jules Anh Tuan Nguyen on Building a Neuroprosthetic\n\nIs it possible to manipulate things with your mind? Possibly. University of Minnesota postdoctoral researcher Jules Anh Tuan Nguyen discusses allowing amputees to control their prosthetic limbs with their thoughts, using neural decoders and deep learning.\n\nWild Things: 3D Reconstructions of Endangered Species With NVIDIA\u2019s Sifei Liu\n\nStudying endangered species can be difficult, as they\u2019re elusive, and the act of observing them can disrupt their lives. Sifei Liu, a senior research scientist at NVIDIA, discusses how scientists can avoid these pitfalls by studying AI-generated 3D representations of these endangered species.\n\nYou can now listen to the AI Podcast through Amazon Music .\n\nAlso get the AI Podcast through iTunes , Google Podcasts , Google Play , Castbox , DoggCatcher, Overcast , PlayerFM , Pocket Casts, Podbay , PodBean , PodCruncher, PodKicker, Soundcloud , Spotify , Stitcher and TuneIn .\n\nMake the AI Podcast better: Have a few minutes to spare? Fill out our listener survey .\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/10/19/ai-tarteel/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTAvMTkvYWktdGFydGVlbC8=.pdf"}, {"question": "What does NVIDIA DLSS 3.5 add?", "gt_answer": "NVIDIA DLSS 3.5 adds Ray Reconstruction, which improves ray-traced image quality for all GeForce RTX GPUs.", "gt_context": "Coming This Fall: NVIDIA DLSS 3.5 for Chaos Vantage, D5 Render, Omniverse and Popular Game Titles Half-Life 2 RTX: An RTX Remix Project\u2019 announced \u2014 plus, gaming-inspired 3D scenes from digital artist Diyor Makhmudov, all this week \u2018In the NVIDIA Studio.\u2019\n\nAuthor: Gerardo Delgado\n\nEditor\u2019s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We\u2019re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.\n\nGamescom , the biggest gaming event of the year, kicks off tomorrow in Cologne, Germany, but gamers and content creators can find some of the latest innovations, tools and AI-powered tech this week In the NVIDIA Studio .\n\nOn the eve of the show\u2019s official opening, NVIDIA announced NVIDIA DLSS 3.5 featuring Ray Reconstruction \u2014 a new neural rendering AI model that creates more beautiful and realistic ray-traced visuals than traditional rendering methods \u2014 for real-time 3D creative apps and games.\n\nNVIDIA RTX Remix , a free modding platform built on NVIDIA Omniverse and available now, gives people the tools to create and share #RTXON mods for classic games. We also announced Half-Life 2 RTX: An RTX Remix Project , a community remaster project of Valve\u2019s Half-Life 2 , one of the highest-rated games of all time .\n\nThis week\u2019s In the NVIDIA Studio installment also features digital artist Diyor Makhmudov\u2019s 3D work, inspired by the extraordinary gaming franchise The Witcher .\n\nReallusion software released an update to the iClone Omniverse Connector , including real-time synchronization of projects and enhanced import functionality for OpenUSD, enabling quicker, more efficient workflows. Learn more in the latest edition of the Into the Omniverse series.\n\nFinally, calling all video editors to sign up for the premiere DaVinci Resolve event \u2014 ResolveCon \u2014 in Portland, Oregon, from Aug. 25-27. In-person attendees can win giveaways, including new GeForce RTX GPUs, while virtual attendees can view tutorials livestreamed by In the NVIDIA Studio artist Casey Faris.\n\nNVIDIA DLSS 3.5 adds Ray Reconstruction, which improves ray-traced image quality for all GeForce RTX GPUs by replacing hand-tuned denoisers with an NVIDIA supercomputer-trained AI network that generates higher-quality pixels in between sampled rays.\n\nSeeing is believing \u2014 watch the Tech Talk with NVIDIA Vice President of Applied Deep Learning Research Bryan Catanzaro to learn how DLSS 3.5 works.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDgvMjIvZGxzcy1haS1ydHgtcmVtaXgtaGFsZi1saWZlLWQ1LXJlbmRlci1jaGFvcy12YW50YWdlLw==.pdf"}, {"question": "What has inspired Diyorbek Makhmudov to create 3D worlds?", "gt_answer": "The Witcher franchise", "gt_context": "Creative apps with ray-traced renderers face a wide variety of content that is difficult for traditional denoisers to handle, as they require hand-tuning for every scene. As a result, content previews return suboptimal image quality. With DLSS 3.5, the AI neural network recognizes a wide variety of scenes, producing high-quality images during preview and before committing hours to a final render.\n\nD5 Render and Chaos Vantage, two popular professional-grade 3D apps for architects and designers, feature real-time preview modes with ray tracing. With DLSS 3.5, the AI neural network replaces the denoisers, inferring and producing higher-quality previews while building and iterating.\n\nPopular creative apps Chaos Vantage, D5 Render and NVIDIA Omniverse, as well as popular gaming titles Alan Wake 2 , Cyberpunk 2077 , Cyberpunk 2077: Phantom Liberty and Portal with RTX , are all adding support for NVIDIA DLSS 3.5 this fall.\n\nDevelopers will be able to seamlessly integrate DLSS 3.5 with the new Streamline SDK coming soon. Learn more about DLSS 3.5 .\n\nHalf-Life 2 RTX: An RTX Remix Project is being developed by four of Half-Life 2 \u2019s top mod teams, now known as Orbifold Studios.\n\nUsing the latest version of RTX Remix, Orbifold Studios is rebuilding materials with physically based rendering properties, adding extra geometric detail with Valve\u2019s Hammer editor and using the full range of NVIDIA technologies, including NVIDIA DLSS , NVIDIA RTX IO and NVIDIA Reflex , to breathe new life into the critically acclaimed title.\n\nAs with Portal with RTX , a high-fidelity reimagining of Valve\u2019s timeless classic, and Portal: Prelude RTX , built by community modders, nearly every asset in Half-Life 2 RTX: An RTX Remix Project is being reconstructed in high fidelity and with full ray tracing (otherwise known as path tracing , enabling advanced rendering techniques. Compared to the original, some assets feature 20x the geometric detail.\n\nHalf-Life 2 RTX: An RTX Remix Project is early in development and is a community effort looking to galvanize talented modders and artists everywhere. To join the project, apply via the Orbifold Studios website .\n\nAlready building 3D scenes in immaculate detail, 19-year-old digital creator and 3D lighting artist Diyorbek Makhmudov has the savvy skills of an industry veteran and a bright future ahead.\n\nMakhmudov has always had a deep-rooted passion for gaming, gaining inspiration from the 3D worlds of his favorite games. Most notably, The Witcher franchise has fueled him to create 3D worlds that showcase his own signature look and feel.\n\nUnlike many content creators featured In the NVIDIA Studio , Makhmudov doesn\u2019t like to bring his own life experience into the creative process, enjoying the escapism offered by world-building.\n\n\u201cI like to immerse myself in another universe,\u201d said Makhmudov. \u201cI don\u2019t like to express my feelings, thoughts or emotions in my creations.\u201d", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDgvMjIvZGxzcy1haS1ydHgtcmVtaXgtaGFsZi1saWZlLWQ1LXJlbmRlci1jaGFvcy12YW50YWdlLw==.pdf"}, {"question": "What 3D app does Makhmudov prefer to use?", "gt_answer": "Makhmudov prefers to use Cinema 4D as his preferred 3D app.", "gt_context": "Makhmudov follows standard 3D creative workflow practices: gathering reference material, prepping materials, shaping environments and tinkering with materials, textures and colors. But it\u2019s in 3D creation where he really shines.\n\nMakhmudov uses his preferred 3D app \u2014 Cinema 4D \u2014 to achieve smooth interactivity while working with complex 3D models thanks to the NVIDIA GPU-accelerated viewport. It\u2019s powered by a GeForce RTX 3090 graphics card, which offers considerable increases in efficiency while fueling creativity.\n\nIn the video above, Makhmudov is able to move within the scene, tinkering while the scene renders in real time.\n\nCinema 4D also supports several popular GPU-accelerated renderers such as Chaos V-Ray, OTOY OctaneRender and Maxon\u2019s Redshift. This flexibility allows Makhmudov to use whichever best suits his needs.\n\n\u201cRedshift is fast, has a good light-linking system and I have almost full control over everything,\u201d said Makhmudov. He prefers OctaneRender for exporting ultra-realistic renders quickly. The built-in Cinema 4D render is also a speedy option. The only thing he can\u2019t do is work on a CPU alone because, to quote Makhmudov, \u201cIt\u2019s very slow.\u201d\n\n\u201cWhen you do personal work, it pushes you more to achieve a good result,\u201d said Makhmudov. \u201cAs a bonus, that big portfolio will be a major advantage in your job search.\u201d\n\nCheck out Makhmudov on ArtStation .\n\nFollow NVIDIA Studio on Instagram , Twitter and Facebook . Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter .\n\nGet started with NVIDIA Omniverse by downloading the standard license free , or learn how Omniverse Enterprise can connect your team . Developers can get started with Omniverse resources. Stay up to date on the platform by subscribing to the newsletter , and follow NVIDIA Omniverse on Instagram , Medium and Twitter .\n\nFor more, join the Omniverse community and check out the Omniverse forums , Discord server , Twitch and YouTube channels.\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/08/22/dlss-ai-rtx-remix-half-life-d5-render-chaos-vantage/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDgvMjIvZGxzcy1haS1ydHgtcmVtaXgtaGFsZi1saWZlLWQ1LXJlbmRlci1jaGFvcy12YW50YWdlLw==.pdf"}, {"question": "What feature in Blender did CG Geek use to create detailed scenes using small amounts of data?", "gt_answer": "Geo nodes", "gt_context": "3D Artist \u2018CG Geek\u2019 Builds Massive Sci-Fi World in Record Time This Week \u2018In the NVIDIA Studio\u2019 Gain advice, tips and inspiration from the NVIDIA Studio community of artists and designers, plus join the #NewYearNewArt challenge.\n\nAuthor: Gerardo Delgado\n\nEditor\u2019s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We\u2019re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.\n\n3D and animation extraordinaire CG Geek completed an ambitious design challenge this week In the NVIDIA Studio \u2014 building a massive, sci-fi-inspired 3D world in only three days. The creation of the world, dubbed The Fullness of Time , was fast-tracked by his GeForce RTX 4090 GPU .\n\nAnimator and visual effects artist CG Geek teaches aspiring artists how to get started on his popular YouTube channel . He also shares tutorials on Blender, his favorite 3D app because \u201cit\u2019s open source, and the community is always challenging one another to push limits even further,\u201d he said.\n\nTo see how far those limits could be pushed, CG Geek kicked off a timed design challenge last week as part of CES , putting together a fully rendered and animated project in only three days \u2014 powered by NVIDIA Studio technologies and his GeForce RTX 4090 GPU.\n\nThe artist polled his community on Instagram, Twitter and YouTube for a genre to use as a starting point for the project.\n\nSci-fi was the clear winner, so he envisioned what a far-future city skyline would look like. The first step was to populate the space with futuristic 3D buildings and skyscrapers.\n\nCG Geek formed simple shapes in Blender, scaling them to match the sizes of real-world buildings. He then added materials and reflections to create beautifully textured structures before adding geometry, or geo nodes, a recently added feature in Blender and a crucial aspect of 3D modeling.\n\nGeo nodes virtually eliminate procedural workflows. The traditional process of constructing objects follows a linear pattern, with one tool used after the next and each step only reversible by manual undo operations. Geo nodes allow for non-linear, non-destructive workflows and the instancing of objects to create incredibly detailed scenes using small amounts of data.\n\nCG Geek scanned objects using his iPhone to create realistic 3D models from photos. He then used Adobe Photoshop to apply detailed textures, one of 30 GPU-accelerated features made possible by his GeForce RTX 4090 GPU. The RTX-accelerated Super Resolution feature, which uses AI to upscale images with higher quality, was especially useful for exporting textures across the entire piece, CG Geek said.\n\nCG Geek added fine details like ivy and realistic wear and tear to his sci-fi buildings until he reached the desired look.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDEvMTEvaW4tdGhlLW52aWRpYS1zdHVkaW8tamFudWFyeS0xMS8=.pdf"}, {"question": "What technology did CG Geek use to create detailed, low-poly sci-fi buildings quickly?", "gt_answer": "CG Geek used Blender Cycle\u2019s RTX-accelerated, AI-powered OptiX ray tracing in conjunction with NVIDIA RTX 4090 GPU to create detailed, low-poly sci-fi buildings quickly.", "gt_context": "His process used during the challenge is covered in a tutorial on building detailed, low-poly sci-fi buildings in a matter of minutes:\n\nCG Geek\u2019s RTX 4090 GPU enables him to use Blender Cycle\u2019s RTX-accelerated, AI-powered OptiX ray tracing in the viewport for interactive, photorealistic movement within such a detailed environment. This virtually eliminates wait times, allowing him to create at the speed of his imagination.\n\nThe artist quickly and easily applied realistic textures for the sand and water as well as animations. Final renders were delivered quickly with RTX-accelerated OptiX ray tracing in Blender Cycles.\n\nIt took CG Geek just 21 hours to build the futuristic metropolis and 10 hours to render it at 4K resolution.\n\n\u201cCurrently, NVIDIA stands alone at the top of high-performance GPUs for 3D tasks like Blender,\u201d he said. \u201dFor real-time editing workflows, nothing comes close to beating the RTX 4090 GPU in speed.\u201d\n\nView more of CG Geek\u2019s work and tutorials .\n\nNine to five o\u2019clock is when people typically have a job, classes or other responsibilities. For many artists, it\u2019s from five to nine that the real creativity kicks in and inspirational juices start flowing.\n\nMore than ever, creators are turning their passions into opportunities and monetizing their side hustles. NVIDIA Studio is celebrating these entrepreneurs and helping them learn, explore and take their creative endeavors to the next level:\n\nWith technology and resources \u2014 the latest advances in GPU-acceleration and AI-powered features help get the job done faster, plus Studio Drivers add creative app optimization and reliability to systems.\n\nWith education \u2014 hundreds of select tutorials, free to the public and created by creative professionals, offer everything from quick tricks and tips to multipart, in-depth series to elevate and expand the skill sets of content creators.\n\nWith inspiration \u2014 experience the creative journeys of interdimensional Studio artists, moving storytellers and esteemed streamers across creative fields in 3D animation, video editing, graphic design, photography and more.\n\nBegin your side hustle journey with NVIDIA Studio .\n\nThe latest NVIDIA Studio community challenge has kicked off: #NewYearNewArt. A new year means new art! nnn\n\nJoin our Jan-Feb #NewYearNewArt challenge by sharing any new or relatively new art you've created for a chance to be featured on our channels! Be sure to tag #NewYearNewArt and thanks to @AOi__Pan for sharing their new art. n pic.twitter.com/lXiFLROhQh\n\n\u2014 NVIDIA Studio (@NVIDIAStudio) January 10, 2023\n\nWith a new year will come new art, and we\u2019d love to see yours! Use the hashtag #NewYearNewArt and tag @NVIDIAStudio to show off recent creations for a chance to be featured on our channels.\n\nAccess tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter .\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/01/11/in-the-nvidia-studio-january-11/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDEvMTEvaW4tdGhlLW52aWRpYS1zdHVkaW8tamFudWFyeS0xMS8=.pdf"}, {"question": "What are some examples of intelligent machines and autonomous robots?", "gt_answer": "Examples of intelligent machines and autonomous robots include automated distribution facilities, robots monitoring grocery stores, and robot arms working alongside humans on a production line.", "gt_context": "Top 5 Edge AI Trends to Watch in 2023\n\nAuthor: Amanda Saunders\n\nWith the state of the world under constant flux in 2022, some technology trends were put on hold while others were accelerated. Supply chain challenges, labor shortages and economic uncertainty had companies reevaluating their budgets for new technology.\n\nFor many organizations, AI is viewed as the solution to a lot of the uncertainty bringing improved efficiency, differentiation, automation and reduced cost.\n\nUntil now, AI computing has operated almost exclusively in the cloud. But increasingly diverse streams of data are being generated around the clock from sensors at the edge. These require real-time inference, which is leading more AI deployments to move to edge computing .\n\nFor airports , stores , hospitals and more, AI brings advanced efficiency, automation and even cost reduction, which is why edge AI adoption accelerated last year.\n\nIn 2023, expect to see a similarly challenging environment, which will drive the following edge AI trends.\n\nReturn on investment is always an important factor for technology purchases. But with companies looking for new ways to reduce cost and gain a competitive advantage, expect AI projects to become more common.\n\nA few years ago, AI was often viewed as experimental, but, according to research from IBM , 35% of companies today report using AI in their business, and an additional 42% report they\u2019re exploring AI. Edge AI use cases, in particular, can help increase efficiency and reduce cost, making them a compelling place to focus new investments.\n\nFor example, supermarkets and big box stores are investing heavily in AI at self-checkout machines to reduce loss from theft and human error. With solutions that can detect errors with 98% accuracy, companies can quickly see a return of investment in a matter of months.\n\nAI industrial inspection also has an immediate return, helping augment human inspectors on factory lines. Bootstrapped with synthetic data , AI can detect defects at a much higher rate and address a variety of defects that simply cannot be captured manually, resulting in more products with fewer false negative or positive detections.\n\nOften seen as a far-off use case of edge AI, the use of intelligent machines and autonomous robots is on the rise. From automated distribution facilities to meet the demands of same-day deliveries, to robots monitoring grocery stores for spills and stock outs, to robot arms working alongside humans on a production line, these intelligent machines are becoming more common.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTIvMTkvZWRnZS1haS10cmVuZHMtMjAyMy8=.pdf"}, {"question": "What is the impact of edge computing on cybersecurity?", "gt_answer": "Edge computing, particularly when combined with AI use cases, can increase cybersecurity risk for many organizations by creating a wider attack surface outside of the traditional data center and its firewalls.", "gt_context": "According to Gartner , the use of robotics and intelligent machines is expected to grow significantly by the end of the decade. \u201cBy 2030, 80% of humans will engage with smart robots on a daily basis, due to smart robot advancements in intelligence, social interactions and human augmentation capabilities, up from less than 10% today.\u201d (Gartner, \u201cEmerging Technologies: AI Roadmap for Smart Robots \u2014 Journey to a Super Intelligent Humanoid Robot\u201d, G00761328, June 2022)\n\nFor this future to happen, one area of focus that needs attention in 2023 is aiding human and machine collaboration. Automated processes benefit from the strength and repeatable actions performed by robots, leaving humans to perform specialized and dexterous tasks that are more suited to our skills. Expect organizations to invest more in this human-machine collaboration in 2023 as a way to alleviate labor shortages and supply chain issues.\n\nRelated to the trend of human and machine collaboration is that of AI functional safety. First seen in autonomous vehicles , more companies are looking to use AI to add proactive and flexible safety measures to industrial environments.\n\nHistorically, functional safety has been applied in industrial environments in a binary way, with the primary role of the safety function to immediately stop the equipment from causing any harm or damage when an event is triggered. AI, on the other hand, works in combination with context awareness to predict an event happening. This allows AI to proactively send alerts regarding future potential safety events, preventing the events before they happen, which can drastically reduce safety incidents and related downtime in industrial environments.\n\nNew functional safety standards that define the use of AI in safety are expected to be released in 2023 and will open the door for early adoption in factories, warehouses, agricultural use cases and more. One of the first areas for AI safety adoption will focus on improved worker safety, including worker posture detection, falling object prevention and personal protection equipment detection.\n\nCyber attacks rose 50% in 2021 and haven\u2019t slowed down since, making this a top focus for IT organizations. Edge computing, particularly when combined with AI use cases, can increase cybersecurity risk for many organizations by creating a wider attack surface outside of the traditional data center and its firewalls.\n\nEdge AI in industries like manufacturing, energy, and transportation requires IT teams to expand their security footprint into environments traditionally managed by operational technology teams. Operational technology teams typically focus on operational efficiency as their main metric, relying on air-gapped systems with no network connectivity to the outside world. Edge AI use cases will start to break down these restrictions, requiring IT to enable cloud connectivity while still maintaining strict security standards.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTIvMTkvZWRnZS1haS10cmVuZHMtMjAyMy8=.pdf"}, {"question": "What is the role of AI in cybersecurity in 2023?", "gt_answer": "In 2023, AI will be applied to cybersecurity to protect edge devices and flag suspicious behavior by analyzing log data generated from IoT networks.", "gt_context": "With billions of devices and sensors around the world that will all be connected to the internet, IT organizations have to both protect edge devices from direct attack and consider network and cloud security. In 2023, expect to see AI applied to cybersecurity . Log data generated from IoT networks can now be fed through intelligent security models that can flag suspicious behavior and notify security teams to take action.\n\nThe term digital twin refers to perfectly synchronized, physically accurate virtual representations of real-world assets, processes or environments. Last year, NVIDIA partnered with Siemens to enable industrial metaverse use cases, helping customers accelerate their adoption of industrial automation technologies. Leading companies spanning manufacturing, retail, consumer packaged goods and telco, such as BMW , Lowe\u2019s , PepsiCo and Heavy.AI , have also begun building operational digital twins allowing them to simulate and optimize their production environments.\n\nWhat connects digital twins to the physical world and edge computing is the explosion of IoT sensors and data that is driving both these trends. In 2023, we\u2019ll see organizations increasingly connect live data from their physical environment into their virtual simulations. They\u2019ll move away from historical data-based simulations toward a live, digital environment \u2014 a true digital twin.\n\nBy connecting live data from the physical world to their digital twins, organizations can gain real-time insight into their environment, allowing them to make faster and more informed decisions. While still early, expect to see massive growth in this space next year for ecosystem providers and in customer adoption.\n\nWhile the 2023 economic environment remains uncertain, edge AI will certainly be an area of investment for organizations looking to drive automation and efficiency. Many of the trends we saw take off last year continue to accelerate with the new focus on initiatives that help drive sales, reduce costs, grow customer satisfaction and enhance operational efficiency.\n\nVisit NVIDIA\u2019s Edge Computing Solutions page to learn more about edge AI and how we\u2019re helping organizations implement it in their environments today.\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/12/19/edge-ai-trends-2023/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTIvMTkvZWRnZS1haS10cmVuZHMtMjAyMy8=.pdf"}, {"question": "What is the purpose of the NYUTron model?", "gt_answer": "The NYUTron model predicts a patient's risk of 30-day readmission and other clinical outcomes.", "gt_context": "NYU, NVIDIA Collaborate on Large Language Model to Predict Patient Readmission\n\nNYUTron, an AI model featured today in Nature, is deployed at NYU Langone Health.\n\nAuthor: Anthony Costa\n\nGetting discharged from the hospital is a major milestone for patients \u2014 but sometimes, it\u2019s not the end of their road to recovery. Nearly 15% of hospital patients in the U.S. are readmitted within 30 days of their initial discharge, which is often associated with worse outcomes and higher costs for both patients and hospitals.\n\nResearchers at NYU Langone Health, the academic medical center of New York University, have collaborated with NVIDIA experts to develop a large language model (LLM) that predicts a patient\u2019s risk of 30-day readmission, as well as other clinical outcomes.\n\nDeployed in the healthcare system\u2019s six inpatient facilities, the NYUTron model \u2014 featured today in the scientific journal Nature \u2014 provides doctors with AI-driven insights that could help them identify patients in need of a clinical intervention to reduce the likelihood of readmission.\n\n\u201cWhen you discharge a patient from the hospital, you don\u2019t expect them to need to return, or you probably should have kept them in the hospital longer,\u201d said Dr. Eric Oermann, assistant professor of radiology and neurosurgery at NYU Grossman School of Medicine and a lead collaborator on NYUTron. \u201cUsing analysis from the AI model, we could soon empower clinicians to prevent or fix situations that put patients at a higher risk of readmission.\u201d\n\nThe model has so far been applied to more than 50,000 patient discharged in NYU\u2019s healthcare system, where it shares predictions of readmission risk with physicians via email notifications. Oermann\u2019s team is next planning a clinical trial to test whether interventions based on NYUTron\u2019s analyses reduce readmission rates.\n\nThe U.S. government tracks 30-day readmission rates as an indicator of the quality of care hospitals are providing. Medical institutions with high rates are fined \u2014 a level of scrutiny that incentivizes hospitals to improve their discharge process.\n\nThere are plenty of reasons why a recently discharged patient may need to be readmitted to the hospital \u2014 among them, infection, overprescription of antibiotics, surgical drains that were removed too early. If these risk factors can be spotted earlier, doctors could intervene by adjusting treatment plans or monitoring patients in the hospital for longer.\n\n\u201cWhile there have been computational models to predict patient readmission since the 1980s, we\u2019re treating this as a natural language processing task that requires a health system-scale corpus of clinical text,\u201d Oermann said. \u201cWe trained our LLM on the unstructured data of electronic health records to see if it could capture insights that people haven\u2019t considered before.\u201d", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDYvMDcvbnl1LWxhcmdlLWxhbmd1YWdlLW1vZGVsLXBhdGllbnQtcmVhZG1pc3Npb24tbmF0dXJlLw==.pdf"}, {"question": "What kind of data was NYUTron pretrained on?", "gt_answer": "NYUTron was pretrained on 10 years of health records from NYU Langone Health, which included more than 4 billion words of clinical notes representing nearly 400,000 patients.", "gt_context": "NYUTron was pretrained on 10 years of health records from NYU Langone Health: more than 4 billion words of clinical notes representing nearly 400,000 patients. The model achieved an accuracy improvement of more than 10 percent over a state-of-the-art machine learning model to predict readmission.\n\nOnce the LLM was trained for the initial use case of 30-day readmission, the team was able to spin out four other predictive algorithms in around a week. These include predicting the length of a patient\u2019s hospital stay, the likelihood of in-hospital mortality, and the chances of a patient\u2019s insurance claims being denied.\n\n\u201cRunning a hospital is in some ways like managing a hotel,\u201d said Oermann. \u201cInsights that help hospitals operate more efficiently means more beds and better care for a greater number of patients.\u201d\n\nNYUTron is an LLM with hundreds of millions of parameters, trained using the NVIDIA NeMo Megatron framework on a large cluster of NVIDIA A100 Tensor Core GPUs .\n\n\u201cMuch of the conversation around language models right now is around gargantuan, general-purpose models with billions of parameters, trained on messy datasets using hundreds or thousands of GPUs,\u201d Oermann said. \u201cWe\u2019re instead using medium-sized models trained on highly refined data to accomplish healthcare-specific tasks.\u201d\n\nTo optimize the model for inference in real-world hospitals, the team developed a modified version of the NVIDIA Triton open-source software for streamlined AI model deployment using the NVIDIA TensorRT software development kit.\n\n\u201cTo deploy a model like this in a live healthcare environment, it has to run efficiently,\u201d Oermann said. \u201cTriton delivers everything you want in an inference framework, making our model blazing fast.\u201d\n\nOermann\u2019s team found that after pretraining their LLM, fine-tuning it onsite with a specific hospital\u2019s data helped to significantly boost accuracy \u2014 a trait that could help other healthcare institutions deploy similar models.\n\n\u201cNot all hospitals have the resources to train a large language model from scratch in-house, but they can adopt a pretrained model like NYUTron and then fine-tune it with a small sample of local data using GPUs in the cloud,\u201d he said. \u201cThat\u2019s within reach of almost everyone in healthcare.\u201d\n\nTo learn more about NYUTron, read the Nature paper and watch this NVIDIA and NYU talk on demand .\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/06/07/nyu-large-language-model-patient-readmission-nature/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDYvMDcvbnl1LWxhcmdlLWxhbmd1YWdlLW1vZGVsLXBhdGllbnQtcmVhZG1pc3Npb24tbmF0dXJlLw==.pdf"}, {"question": "What is the purpose of the partnership between NVIDIA and Foxconn?", "gt_answer": "The purpose of the partnership is to develop automated and autonomous vehicle platforms, with Foxconn manufacturing electronic control units (ECUs) based on NVIDIA DRIVE Orin for the global automotive market.", "gt_context": "Foxconn Partners With NVIDIA to Build Automated Electric Vehicles\n\nFoxconn to Manufacture NVIDIA DRIVE Orin Computers for Global Automotive Market, Integrate NVIDIA DRIVE Hyperion Sensor Architecture for EV Fleets\n\nCES\u2014NVIDIA and Hon Hai Technology Group (Foxconn), the world\u2019s largest technology manufacturer, today announced a strategic partnership to develop automated and autonomous vehicle platforms.\n\nAs part of the agreement, Foxconn will be a tier-one manufacturer, producing electronic control units (ECUs) based on NVIDIA DRIVE Orin\u2122 for the global automotive market. Foxconn manufactured electric vehicles (EVs) will feature DRIVE Orin ECUs and DRIVE Hyperion\u2122 sensors for highly automated driving capabilities.\n\n\u201cThis strategic cooperation with NVIDIA strengthens the intelligent driving solutions Foxconn will be able to provide. Together, we are enabling the industry to build energy-efficient, automated vehicles,\u201d said Eric Yeh, senior director of the Software Development Center at Foxconn. \u201cThis is a well-considered partnership that leverages unique strengths on each side in the pursuit of innovative EV development and opportunities.\u201d\n\nThe partnership with Foxconn will allow NVIDIA to further scale its efforts and meet growing industry demand as more transportation leaders select DRIVE Orin for intelligent vehicles. In addition, by building EVs on the DRIVE Hyperion qualified sensor set, Foxconn will speed up its time-to-market and time-to- cost strategies.\n\n\u201cOur partnership with Foxconn will provide OEMs developing intelligent driving solutions with a world-class supplier that can scale for volume manufacturing of the NVIDIA DRIVE Orin platform,\u201d said Rishi Dhall, vice president of automotive at NVIDIA. \u201cFoxconn\u2019s decision to also use the DRIVE Hyperion sensor suite for its EVs will help accelerate their path to production without compromising safety, reliability or quality.\u201d\n\nThe automotive-grade NVIDIA DRIVE Orin system-on-a-chip achieves up to 254 trillion operations per second and is designed to handle the large number of applications and deep neural networks that run simultaneously in autonomous vehicles. NVIDIA DRIVE Hyperion is a modular development platform and reference architecture for designing autonomous vehicles. Combined, they serve as the brain and central nervous system of the vehicle, processing massive amounts of sensor data in real time so autonomous vehicles can safely perceive, plan and act.", "document": "Rm94Y29ubiAxLzMvMjMucGRm.pdf"}, {"question": "What are Hon Hai's key technologies for driving its long-term growth strategy?", "gt_answer": "Hon Hai's key technologies for driving its long-term growth strategy are new-generation communications technology, AI, and semiconductors.", "gt_context": "About Hon Hai Established in 1974 in Taiwan, Hon Hai Technology Group (\u201cFoxconn\u201d) (2317: Taiwan) is the world\u2019s largest electronics manufacturer. Hon Hai is also the leading technological solution provider, and it continuously leverages its expertise in software and hardware to integrate its unique manufacturing systems with emerging technologies. Hon Hai has expanded its capabilities into the development of electric vehicles, digital health, and robotics, and three key technologies \u2014 new- generation communications technology, AI and semiconductors \u2014 which are key to driving its long-term growth strategy.\n\nIn addition to maximizing value-creation for customers who include many of the world\u2019s leading technology companies, Hon Hai is dedicated to championing environmental sustainability in the manufacturing process and serving as a best-practices model for global enterprises. To learn more, visit www.honhai.com.\n\nAbout NVIDIA Since its founding in 1993, NVIDIA (NASDAQ: NVDA) has been a pioneer in accelerated computing. The company\u2019s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, ignited the era of modern AI and is fueling the creation of the metaverse. NVIDIA is now a full-stack computing company with data-center-scale offerings that are reshaping industry. More information at https://nvidianews.nvidia.com/.\n\nCertain statements in this press release including, but not limited to, statements as to: NVIDIA\u2019s partnership with Foxconn and the benefits and impact thereof; and the benefits and impact of NVIDIA\u2019s products and technologies, including NVIDIA DRIVE Orin and NVIDIA DRIVE Hyperion are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners' products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company's website and are available from NVIDIA without charge. These forward-looking", "document": "Rm94Y29ubiAxLzMvMjMucGRm.pdf"}, {"question": "What does NVIDIA disclaim in their statements?", "gt_answer": "NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.", "gt_context": "statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.\n\n\u00a9 2023 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, NVIDIA DRIVE, NVIDIA DRIVE Hyperion and NVIDIA DRIVE Orin are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.\n\nMarie Labrie Automotive +1-408-921-6987 mlabrie@nvidia.com Jimmy Huang Deputy Spokesman Hon Hai Technology Group (Foxconn) media@foxconn.com", "document": "Rm94Y29ubiAxLzMvMjMucGRm.pdf"}, {"question": "What is the purpose of the BioNeMo Cloud service?", "gt_answer": "The BioNeMo Cloud service is designed to accelerate the creation of new proteins and therapeutics, as well as research in the fields of genomics, chemistry, biology, and molecular dynamics.", "gt_context": "NVIDIA Unveils Large Language Models and Generative AI Service to Advance Life Sciences R&D\n\nPart of NVIDIA AI Foundations, New BioNeMo Cloud Service Accelerates Life Sciences Research, Drug Discovery and Protein Engineering; Amgen and a Dozen Startups Among Early Access Customers\n\nGTC\u2014NVIDIA today announced an expanded set of generative AI cloud services for customizing AI foundation models to accelerate the creation of new proteins and therapeutics, as well as research in the fields of genomics, chemistry, biology and molecular dynamics.\n\nPart of NVIDIA AI Foundations, the new BioNeMo\u2122 Cloud service offering \u2014 for both AI model training and inference \u2014 accelerates the most time-consuming and costly stages of drug discovery. It enables researchers to fine-tune generative AI applications on their own proprietary data, and to run AI model inference directly in a web browser or through new cloud application programming interfaces (APIs) that easily integrate into existing applications.\n\n\u201cThe transformative power of generative AI holds enormous promise for the life science and pharmaceutical industries,\u201d said Kimberly Powell, vice president of healthcare at NVIDIA. \u201cNVIDIA\u2019s long collaboration with pioneers in the field has led to the development of BioNeMo Cloud Service, which is already serving as an AI drug discovery laboratory. It provides pretrained models and allows customization of models with proprietary data that serve every stage of the drug-discovery pipeline, helping researchers identify the right target, design molecules and proteins, and predict their interactions in the body to develop the best drug candidate.\u201d\n\nAmgen Among Early Users Amgen, one of the world\u2019s leading biotechnology companies, is already using the service to advance its research and development efforts.\n\n\u201cBioNeMo is dramatically accelerating our approach to biologics discovery,\u201d said Peter Grandsard, executive director of Biologics Therapeutic Discovery, Center for Research Acceleration by Digital Innovation at Amgen. \u201cWith it, we can pretrain large language models for molecular biology on Amgen\u2019s proprietary data, enabling us to explore and develop therapeutic proteins for the next generation of medicine that will help patients.\u201d\n\nGenerative AI Supercharges Drug Discovery Pipeline BioNeMo Cloud service includes pretrained AI models to help researchers build AI pipelines for drug development. It has been adopted by drug-discovery companies including Evozyne and Insilico Medicine to support data-driven drug design for new therapeutic candidates.\n\nGenerative AI models can rapidly identify potential drug molecules \u2014 in some cases designing compounds or protein-based therapeutics from scratch. Trained on large-scale datasets of small molecules, proteins, DNA and RNA sequences, these models can predict the 3D structure of a protein and how well a molecule will dock with a target protein.", "document": "TlZJRElBIFVudmVpbHMgQmlvTmVNbyBDbG91ZCBTZXJ2aWNlIDMvMjEvMjMucGRm.pdf"}, {"question": "How did Amgen use BioNeMo's ESM model architecture?", "gt_answer": "Amgen pretrained and fine-tuned BioNeMo's ESM model architecture using its own proprietary data on antibodies.", "gt_context": "New Generative AI Models Available With BioNeMo Service Early Access BioNeMo now has six new optimized, open-source models, in addition to its previously announced MegaMolBART generative chemistry model, ESM1nv protein language model and OpenFold protein structure prediction model. They include:\n\nAlphaFold2: A deep learning model that reduces the time it takes to determine a protein\u2019s structure from years to minutes or even seconds, just by using its amino acid sequence, developed by DeepMind and already used by over a million researchers. DiffDock: To help researchers understand how a drug molecule will bind with a target protein, this model predicts the 3D orientation and docking interaction of small molecules with high accuracy and computational efficiency. ESMFold: This protein structure prediction model, using Meta AI\u2019s ESM2 protein language model, can estimate the 3D structure of a protein based on a single amino acid sequence, without requiring examples of several similar sequences. ESM2: This protein language model is used for inferring machine representations of proteins which are useful for downstream tasks such as protein structure prediction, property prediction and molecular docking. MoFlow: Used for molecular optimization and small molecule generation, this generative chemistry model creates molecules from scratch, coming up with diverse chemical structures for potential therapeutics. ProtGPT-2: This language model generates novel protein sequences to help researchers design proteins with unique structures, properties and functions.\n\nThe BioNeMo Service makes these generative AI models easily accessible through a browser-based interface for interactive inference and protein structure visualization. And by pairing BioNeMo with the supercomputing resources in NVIDIA DGX\u2122\n\nCloud, researchers can customize their models on a fully managed software service using NVIDIA Base Command\u2122 Platform and the NVIDIA AI Enterprise software suite.\n\nPharma Companies, Startups Tap BioNeMo to Optimize AI Workflows Pharmaceutical companies and drug discovery startups are using BioNeMo today and, in many cases, seeing significant results.\n\nAmgen pretrained and fine-tuned BioNeMo\u2019s ESM model architecture using its own proprietary data on antibodies. It was able to slash the time it takes to train five custom models for molecule screening and optimization from three months to a few weeks on DGX Cloud.\n\nResearchers at Evozyne, a Chicago-based biotechnology company and member of the NVIDIA Inception program for cutting-edge startups, have collaborated with NVIDIA to develop a BioNeMo-based deep learning model called the Protein Transformer Variational AutoEncoder. The generative AI model, fine-tuned on Evozyne\u2019s proprietary protein data, enables the design of synthetic variants with significantly improved performance compared to enzymes found in nature.", "document": "TlZJRElBIFVudmVpbHMgQmlvTmVNbyBDbG91ZCBTZXJ2aWNlIDMvMjEvMjMucGRm.pdf"}, {"question": "What is Insilico Medicine using BioNeMo for?", "gt_answer": "Insilico Medicine is using BioNeMo to accelerate the early drug discovery process.", "gt_context": "Insilico Medicine, a premier member of NVIDIA Inception, is using BioNeMo to accelerate the early drug discovery process, which traditionally takes more than four years and costs around $500 million. Using generative AI from end to end, Insilico was able to identify a preclinical candidate drug in one-third of the time and for one-tenth of the cost. The drug is expected to soon enter phase 2 clinical trials with patients.\n\nSign up for early access to BioNeMo.\n\nDiscover the latest in AI and healthcare \u2014 including three sessions with speakers from Amgen, a session from Evozyne and another from DeepMind on AlphaFold \u2014 at GTC, running online through Thursday, March 23. Registration is free.\n\nWatch NVIDIA founder and CEO Jensen Huang discuss the BioNeMo Cloud service in his GTC keynote on demand.\n\nAbout NVIDIA Since its founding in 1993, NVIDIA (NASDAQ: NVDA) has been a pioneer in accelerated computing. The company\u2019s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, ignited the era of modern AI and is fueling the creation of the metaverse. NVIDIA is now a full-stack computing company with data-center-scale offerings that are reshaping industry. More information at https://nvidianews.nvidia.com/.", "document": "TlZJRElBIFVudmVpbHMgQmlvTmVNbyBDbG91ZCBTZXJ2aWNlIDMvMjEvMjMucGRm.pdf"}, {"question": "What are some of the risks and uncertainties mentioned in the press release?", "gt_answer": "Some of the risks and uncertainties mentioned in the press release include global economic conditions, reliance on third parties, technological development and competition, changes in consumer preferences or demands, and unexpected loss of performance of products or technologies when integrated into systems.", "gt_context": "Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, performance, features and availability of our collaborations with Amgen, Evozyne and Insilico Medicine; the benefits, impact, performance, features and availability of our products and technologies, including NVIDIA AI Foundations such as the new BioNeMo Cloud service offering, BioNeMo models including the MegaMolBART generative chemistry model, ESM1nv protein language model, OpenFold protein structure prediction model, AlphaFold2, DiffDock, ESMFold, ESM2, MoFlow and ProtGPT-2, NVIDIA DGX Cloud, NVIDIA Base Command Platform and the NVIDIA AI Enterprise software suite; the transformative power of generative AI holding enormous promise for the life science and pharmaceutical industries; pharmaceutical companies and drug discovery startups using BioNeMo today and, in many cases, seeing significant results; and the preclinical candidate drug soon entering phase 2 clinical trials with patients are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners' products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company's website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward- looking statements to reflect future events or circumstances.\n\n\u00a9 2023 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, BioNeMo, DGX Cloud and NVIDIA Base Command are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability, and specifications are subject to change without notice.\n\nJanette Ciborowski +1-734-330-8817 jciborowski@nvidia.com", "document": "TlZJRElBIFVudmVpbHMgQmlvTmVNbyBDbG91ZCBTZXJ2aWNlIDMvMjEvMjMucGRm.pdf"}, {"question": "What can generative AI models create?", "gt_answer": "Generative AI models can create text, pixels, 3D objects, and realistic motion.", "gt_context": "NVIDIA CEO: Creators Will Be \u2018Supercharged\u2019 by Generative AI NVIDIA\u2019s Jensen Huang discussed AI-enhanced creativity in a conversation with Mark Read, CEO of WPP, at the Cannes Lions Festival.\n\nAuthor: Isha Salian\n\nGenerative AI will \u201csupercharge\u201d creators across industries and content types, NVIDIA founder and CEO Jensen Huang said today at the Cannes Lions Festival, on the French Riviera.\n\n\u201cFor the very first time, the creative process can be amplified in content generation, and the content generation could be in any modality \u2014 it could be text, images, 3D, videos,\u201d Huang said in a conversation with Mark Read, CEO of WPP \u2014 the world\u2019s largest marketing and communications services company.\n\nAt the event attended by thousands of creators, marketers and brand execs from around the world, Huang outlined the impact of AI on the $700 billion digital advertising industry. He also touched on the ways AI can enhance creators\u2019 abilities, as well as the importance of responsible AI development.\n\n\u201cYou can do content generation at scale, but infinite content doesn\u2019t imply infinite creativity,\u201d he said. \u201cThrough our thoughts, we have to direct this AI to generate content that has to be aligned to your values and your brand tone.\u201d\n\nThe discussion followed Huang\u2019s recent keynote at COMPUTEX, where NVIDIA and WPP announced a collaboration to develop a content engine powered by generative AI and the NVIDIA Omniverse platform for building and operating metaverse applications.\n\nNVIDIA has been pushing the boundaries of graphics technology for 30 years and been at the forefront of the AI revolution for a decade. This combination of expertise in graphics and AI uniquely positions the company to enable the new era of generative AI applications.\n\nHuang said that \u201cthe biggest moment of modern AI\u201d can be traced back to an academic contest in 2012, when a team of University of Toronto researchers led by Alex Krizhevsky showed that NVIDIA GPUs could train an AI model that recognized objects better than any computer vision algorithm that came before it.\n\nSince then, developers have taught neural networks to recognize images, videos, speech, protein structures, physics and more.\n\n\u201cYou could learn the language of almost anything,\u201d Huang said. \u201cOnce you learn the language, you can apply the language \u2014 and the application of language is generation.\u201d\n\nGenerative AI models can create text, pixels, 3D objects and realistic motion, giving professionals superpowers to more quickly bring their ideas to life. Like a creative director working with a team of artists, users can direct AI models with prompts, and fine-tune the output to align with their vision.\n\n\u201cYou have to give the machine feedback like the best creative director,\u201d Read said.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDYvMjAvY3JlYXRvcnMtc3VwZXJjaGFyZ2VkLWJ5LWdlbmVyYXRpdmUtYWktY2FubmVzLWxpb25zLw==.pdf"}, {"question": "What is the key benefit of generative AI for the creative industry?", "gt_answer": "Generative AI's key benefit for the creative industry is its ability to scale up content generation, rapidly generating options for text and visuals that can be used in advertising, marketing and film.", "gt_context": "\u201cYou have to give the machine feedback like the best creative director,\u201d Read said.\n\nThese tools aren\u2019t a replacement for human creativity, Huang emphasized. They augment the skills of artists and marketing professionals to help them feed demand from clients by producing content more quickly and in multiple forms tailored to different audiences.\n\n\u201cWe will democratize content generation,\u201d Huang said.\n\nGenerative AI\u2019s key benefit for the creative industry is its ability to scale up content generation, rapidly generating options for text and visuals that can be used in advertising, marketing and film.\n\n\u201cIn the old days, you\u2019d create hundreds of different ad options that are retrieved based on the medium,\u201d Huang said. \u201cIn the future, you won\u2019t retrieve \u2014 you\u2019ll generate billions of different ads. But every single one of them has to be tone appropriate, has to be brand perfect.\u201d\n\nFor use by professional creators, these AI tools must also produce high-quality visuals that meet or exceed the standard of content captured through traditional methods.\n\nIt all starts with a digital twin , a true-to-reality simulation of a real-world physical asset. The NVIDIA Omniverse platform enables the creation of stunning, photorealistic visuals that accurately represent physics and materials \u2014 whether for images, videos, 3D objects or immersive virtual worlds.\n\n\u201cOmniverse is a virtual world,\u201d Huang said. \u201cWe created a virtual world where AI could learn how to create an AI that\u2019s physically based and grounded by physics.\u201d\n\n\u201cThis virtual world has the ability to ingest assets and content that\u2019s created by any tool, because we have this interface called USD,\u201d he said, referring to the Universal Scene Description framework for collaborating in 3D. With it, artists and designers can combine assets developed using popular tools from companies like Adobe and Autodesk with virtual worlds developed using generative AI.\n\nNVIDIA Picasso , a foundry for custom generative AI models for visual design unveiled earlier this year, also supports best-in-class image, video and 3D generative AI capabilities developed in collaboration with partners including Adobe, Getty Images and Shutterstock.\n\n\u201cWe created a platform that makes it possible for our partners to train from data that was licensed properly from, for example, Getty, Shutterstock, Adobe,\u201d Huang said. \u201cThey\u2019re respectful of the content owners. The training data comes from that source, and whatever economic benefits come from that could accrete back to the creators.\u201d\n\nLike any groundbreaking technology, it\u2019s critical that AI is developed and deployed thoughtfully, Read and Huang said. Technology to watermark AI-generated assets and to detect whether a digital asset was modified or counterfeited will support these goals.\n\n\u201c We have to put as much energy into the capabilities of AI as we do the safety of AI,\u201d Huang said. \u201cIn the world of advertising, safety is brand alignment, brand integrity, appropriate tone and truth.\u201d", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDYvMjAvY3JlYXRvcnMtc3VwZXJjaGFyZ2VkLWJ5LWdlbmVyYXRpdmUtYWktY2FubmVzLWxpb25zLw==.pdf"}, {"question": "How is WPP using AI in its digital advertising?", "gt_answer": "WPP is using AI as a tool to boost creativity and personalization in digital advertising. They are building physically accurate digital twins of products using brand-specific product-design data and combining it with AI-generated objects and digital environments to create virtual sets for marketing content.", "gt_context": "As a leader in digital advertising, WPP is embracing AI as a tool to boost creativity and personalization, helping creators across the industry craft compelling messages that reach the right consumer.\n\n\u201cFrom the creative process to the customer, there\u2019s going to have to be ad agencies in the middle that understand the technology,\u201d Huang said. \u201cThat entire process in the middle requires humans in the loop. You have to understand the voice of the brand you\u2019re trying to represent.\u201d\n\nUsing Omniverse Cloud , WPP\u2019s creative professionals can build physically accurate digital twins of products using a brand\u2019s specific product-design data. This real-world data can be combined with AI-generated objects and digital environments \u2014 licensed through partners such as Adobe and Getty Images \u2014 to create virtual sets for marketing content.\n\n\u201cWPP is going to unquestionably become an AI company,\u201d Huang said. \u201cYou\u2019ll create an AI factory where the input is creativity, thoughts and prompts, and what comes out of it is content.\u201d\n\nEnhanced by responsibly trained, NVIDIA-accelerated generative AI, this content engine will boost creative teams\u2019 speed and efficiency, helping them quickly render brand-accurate advertising content at scale.\n\n\u201cThe type of content you\u2019ll be able to help your clients generate will be practically infinite,\u201d Huang said. \u201cFrom the days of hundreds of examples of content that you create for a particular brand or for a particular campaign, it\u2019s going to eventually become billions of generated content for every individual.\u201d\n\nLearn more about NVIDIA\u2019s collaboration with WPP .\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/06/20/creators-supercharged-by-generative-ai-cannes-lions/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDYvMjAvY3JlYXRvcnMtc3VwZXJjaGFyZ2VkLWJ5LWdlbmVyYXRpdmUtYWktY2FubmVzLWxpb25zLw==.pdf"}, {"question": "What game is being released tomorrow?", "gt_answer": "LEGO Brawls", "gt_context": "GFN Thursday Slides Into September With 22 New Games\n\nPlan this month\u2019s adventure with the latest additions, including 19 day-and-date releases.\n\nAuthor: GeForce NOW Community\n\nWe\u2019d wake you up when September ends, but then you\u2019d miss out on a whole new set of games coming to GeForce NOW .\n\nGear up for 22 games joining the GeForce NOW library , with 19 day-and-date releases including action role-playing game Steelrising . Playing them all will take some serious strategy.\n\nAnd build the perfect Minifigure Fighter in LEGO Brawls , one of 10 new additions streaming this week.\n\nFinally, did you hear? The 2.0.44 update, starting to roll out now and continuing over the next week, is bringing new audio modes to the PC and Mac apps. Priority members can experience support for 5.1 Surround sound, and GeForce NOW RTX 3080 members can enjoy support for both 5.1 and 7.1 surround sound.\n\nThe revolution is streaming from the cloud. GeForce NOW brings 22 new titles in September to nearly all devices. Steel yourself for the challenging action-RPG Steelrising , arriving later this month at launch with RTX ON.\n\nPlay as Aegis, a mechanical masterpiece, and save France from the madness of King Louis XVI and his army of mechanical soldiers. String together dodges, parries, jumps and devastating attacks to fight through Paris. Encounter allies and enemies in historical figures like Marie Antoinette, Lafayette, Robespierre and more.\n\nLead the revolution across low-end PCs, Macs and mobile phones . Experience Steelrising with beautiful, cinematic graphics turning RTX ON and take cloud gaming to the next level by upgrading to the RTX 3080 membership , streaming at 4K resolution on PC and Mac native apps.\n\nCheck out the full list of games coming in September:\n\nTRAIL OUT (New release on Steam , Sept. 7)\n\nSteelrising (New release on Steam and Epic Games Store , Sept. 8)\n\nBroken Pieces (New release on Steam , Sept. 9)\n\nIsonzo (New release on Steam and Epic Games Store , Sept. 13)\n\nLittle Orpheus (New release on Steam and Epic Games Store , Sept. 13)\n\nQ.U.B.E. 10th Anniversary (New release on Steam , Sept. 14)\n\nMetal: Hellsinger (New release on Steam , Sept. 15)\n\nStones Keeper (New release on Steam , Sept. 15)\n\nSBK 22 (New release on Steam , Sept. 15)\n\nConstruction Simulator (New release on Steam , Sept. 20)\n\nSoulstice (New release on Steam , Sept. 20)\n\nThe Legend of Heroes: Trails from Zero (New release on Steam and Epic Games Store , Sept. 27)\n\nBrewmaster: Beer Brewing Simulator (New release on Steam , Sept. 29)\n\nJagged Alliance: Rage! ( Steam )\n\nWeable ( Steam )\n\nAnimal Shelter ( Steam )\n\nRiver City Saga: Three Kingdoms ( Steam )\n\nGround Branch ( Steam )\n\nThe September gaming fun starts with 10 new games streaming this week, including tomorrow\u2019s release of LEGO Brawls , streaming on GeForce NOW for PC, macOS, and Chrome OS and web browsers.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMDkvMDEvZ2Vmb3JjZS1ub3ctdGh1cnNkYXktc2VwdGVtYmVyLTEv.pdf"}, {"question": "What are some of the new games released recently?", "gt_answer": "The new games released recently include Call of the Wild: The Angler, F1 Manager 2022, Scathe, Gerda: A Flame in Winter, MythBusters: The Game - Crazy Experiments Simulator, LEGO Brawls, Arcade Paradise, Dark Deity, Hotline Miami 2: Wrong Number, and Lumencraft.", "gt_context": "Dream up the ultimate LEGO Minifigure brawlers and bash your way into the first team-action brawler set in the LEGO universe. Design heroes with unique styles, strategies and personalities \u2014 and level them up for unlockable content. Team up and brawl 4v4, party with friends or play in a battle-royale-style game mode to beat the competition. With ultra-low latency, there\u2019s no need to worry about lagging behind.\n\nCatch the complete list of games streaming this week:\n\nCall of the Wild: The Angler (New release on Steam and Epic Games Store )\n\nF1 Manager 2022 (New release on Steam and Epic Games Store )\n\nScathe (New release on Steam )\n\nGerda: A Flame in Winter (New release on Steam , Sept. 1)\n\nMythBusters: The Game \u2013 Crazy Experiments Simulator (New release on Steam , Sept. 1)\n\nLEGO Brawls (New release on Steam , Sept. 2)\n\nArcade Paradise ( Epic Games Store )\n\nDark Deity ( Epic Games Store )\n\nHotline Miami 2: Wrong Number ( Steam )\n\nLumencraft ( Steam )\n\nOn top of the 38 games announced last month, an extra four came to the cloud in August:\n\nDestiny 2 ( Epic Games Store )\n\nGuild Wars 2 ( Steam )\n\nTyrant\u2019s Blessing ( Epic Games Store )\n\nWarhammer 40,000: Mechanicus ( Epic Games Store )\n\nOne game announced last month, Mondealy ( Steam ), didn\u2019t make it due to a shift in the release date.\n\nWith all of these sweet new games to play, we want to know what snack is powering your gaming sessions up. Let us know on Twitter or in the comments below.\n\nWhat is your go-to snack while playing games? \u2014 nn NVIDIA GeForce NOW (@NVIDIAGFN) August 31, 2022\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/09/01/geforce-now-thursday-september-1/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMDkvMDEvZ2Vmb3JjZS1ub3ctdGh1cnNkYXktc2VwdGVtYmVyLTEv.pdf"}, {"question": "What are the benefits of generative AI?", "gt_answer": "Generative AI allows users to quickly create text, images, 3D models, and more, enabling new business models and accelerating existing ones.", "gt_context": "NVIDIA Takes Inference to New Heights Across MLPerf Tests\n\nNVIDIA H100 and L4 GPUs took generative AI and all other workloads to new levels in the latest MLPerf benchmarks, while Jetson AGX Orin made performance and efficiency gains.\n\nAuthor: Dave Salvator\n\nMLPerf remains the definitive measurement for AI performance as an independent, third-party benchmark. NVIDIA\u2019s AI platform has consistently shown leadership across both training and inference since the inception of MLPerf, including the MLPerf Inference 3.0 benchmarks released today.\n\n\u201cThree years ago when we introduced A100, the AI world was dominated by computer vision. Generative AI has arrived,\u201d said NVIDIA founder and CEO Jensen Huang.\n\n\u201cThis is exactly why we built Hopper, specifically optimized for GPT with the Transformer Engine. Today\u2019s MLPerf 3.0 highlights Hopper delivering 4x more performance than A100.\n\n\u201cThe next level of Generative AI requires new AI infrastructure to train large language models with great energy efficiency. Customers are ramping Hopper at scale, building AI infrastructure with tens of thousands of Hopper GPUs connected by NVIDIA NVLink and InfiniBand.\n\n\u201cThe industry is working hard on new advances in safe and trustworthy Generative AI. Hopper is enabling this essential work,\u201d he said.\n\nThe latest MLPerf results show NVIDIA taking AI inference to new levels of performance and efficiency from the cloud to the edge.\n\nSpecifically, NVIDIA H100 Tensor Core GPUs running in DGX H100 systems delivered the highest performance in every test of AI inference, the job of running neural networks in production. Thanks to software optimizations , the GPUs delivered up to 54% performance gains from their debut in September.\n\nIn healthcare, H100 GPUs delivered a 31% performance increase since September on 3D-UNet, the MLPerf benchmark for medical imaging.\n\nPowered by its Transformer Engine , the H100 GPU, based on the Hopper architecture, excelled on BERT, a transformer-based large language model that paved the way for today\u2019s broad use of generative AI.\n\nGenerative AI lets users quickly create text, images, 3D models and more. It\u2019s a capability companies from startups to cloud service providers are rapidly adopting to enable new business models and accelerate existing ones.\n\nHundreds of millions of people are now using generative AI tools like ChatGPT \u2014 also a transformer model \u2014 expecting instant responses.\n\nAt this iPhone moment of AI, performance on inference is vital. Deep learning is now being deployed nearly everywhere, driving an insatiable need for inference performance from factory floors to online recommendation systems .\n\nNVIDIA L4 Tensor Core GPUs made their debut in the MLPerf tests at over 3x the speed of prior-generation T4 GPUs. Packaged in a low-profile form factor, these accelerators are designed to deliver high throughput and low latency in almost any server.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDQvMDUvaW5mZXJlbmNlLW1scGVyZi1haS8=.pdf"}, {"question": "What is the advantage of using L4 GPUs for the BERT model?", "gt_answer": "L4 GPUs deliver stunning results on the performance-hungry BERT model due to their support for the key FP8 format.", "gt_context": "L4 GPUs ran all MLPerf workloads. Thanks to their support for the key FP8 format, their results were particularly stunning on the performance-hungry BERT model.\n\nIn addition to stellar AI performance, L4 GPUs deliver up to 10x faster image decode, up to 3.2x faster video processing and over 4x faster graphics and real-time rendering performance.\n\nAnnounced two weeks ago at GTC , these accelerators are already available from major systems makers and cloud service providers . L4 GPUs are the latest addition to NVIDIA\u2019s portfolio of AI inference platforms launched at GTC.\n\nNVIDIA\u2019s full-stack AI platform showed its leadership in a new MLPerf test.\n\nThe so-called network-division benchmark streams data to a remote inference server. It reflects the popular scenario of enterprise users running AI jobs in the cloud with data stored behind corporate firewalls.\n\nOn BERT, remote NVIDIA DGX A100 systems delivered up to 96% of their maximum local performance, slowed in part because they needed to wait for CPUs to complete some tasks. On the ResNet-50 test for computer vision, handled solely by GPUs, they hit the full 100%.\n\nBoth results are thanks, in large part, to NVIDIA Quantum Infiniband networking, NVIDIA ConnectX SmartNICs and software such as NVIDIA GPUDirect .\n\nSeparately, the NVIDIA Jetson AGX Orin system-on-module delivered gains of up to 63% in energy efficiency and 81% in performance compared with its results a year ago. Jetson AGX Orin supplies inference when AI is needed in confined spaces at low power levels, including on systems powered by batteries.\n\nFor applications needing even smaller modules drawing less power, the Jetson Orin NX 16G shined in its debut in the benchmarks. It delivered up to 3.2x the performance of the prior-generation Jetson Xavier NX processor.\n\nThe MLPerf results show NVIDIA AI is backed by the industry\u2019s broadest ecosystem in machine learning.\n\nTen companies submitted results on the NVIDIA platform in this round. They came from the Microsoft Azure cloud service and system makers including ASUS, Dell Technologies , GIGABYTE, New H3C Information Technologies, Lenovo , Nettrix, Supermicro and xFusion.\n\nTheir work shows users can get great performance with NVIDIA AI both in the cloud and in servers running in their own data centers.\n\nNVIDIA partners participate in MLPerf because they know it\u2019s a valuable tool for customers evaluating AI platforms and vendors. Results in the latest round demonstrate that the performance they deliver today will grow with the NVIDIA platform.\n\nNVIDIA AI is the only platform to run all MLPerf inference workloads and scenarios in data center and edge computing. Its versatile performance and efficiency make users the real winners.\n\nReal-world applications typically employ many neural networks of different kinds that often need to deliver answers in real time.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDQvMDUvaW5mZXJlbmNlLW1scGVyZi1haS8=.pdf"}, {"question": "What is the purpose of MLPerf benchmarks?", "gt_answer": "The purpose of MLPerf benchmarks is to cover popular AI workloads and ensure dependable and flexible performance for IT decision makers.", "gt_context": "For example, an AI application may need to understand a user\u2019s spoken request, classify an image, make a recommendation and then deliver a response as a spoken message in a human-sounding voice. Each step requires a different type of AI model.\n\nThe MLPerf benchmarks cover these and other popular AI workloads. That\u2019s why the tests ensure IT decision makers will get performance that\u2019s dependable and flexible to deploy.\n\nUsers can rely on MLPerf results to make informed buying decisions, because the tests are transparent and objective. The benchmarks enjoy backing from a broad group that includes Arm, Baidu, Facebook AI, Google, Harvard, Intel, Microsoft, Stanford and the University of Toronto.\n\nThe software layer of the NVIDIA AI platform, NVIDIA AI Enterprise , ensures users get optimized performance from their infrastructure investments as well as the enterprise-grade support, security and reliability required to run AI in the corporate data center.\n\nAll the software used for these tests is available from the MLPerf repository , so anyone can get these world-class results.\n\nOptimizations are continuously folded into containers available on NGC , NVIDIA\u2019s catalog for GPU-accelerated software. The catalog hosts NVIDIA TensorRT , used by every submission in this round to optimize AI inference.\n\nRead this technical blog for a deeper dive into the optimizations fueling NVIDIA\u2019s MLPerf performance and efficiency.\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/04/05/inference-mlperf-ai/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDQvMDUvaW5mZXJlbmNlLW1scGVyZi1haS8=.pdf"}, {"question": "Which startup is using NVIDIA Jetson-enabled sidewalk robots for last-mile deliveries?", "gt_answer": "Oakland-based startup Cartken", "gt_context": "Top Food Stories From 2022: Meet 4 Startups Putting AI on the Plate\n\nAuthor: Isha Salian\n\nThis holiday season, feast on the bounty of food-themed stories NVIDIA Blog readers gobbled up in 2022.\n\nStartups in the retail industry \u2014 and particularly in quick-service restaurants \u2014 are using NVIDIA AI and robotics technology to make it easier to order food in drive-thrus, find beverages on store shelves and have meals delivered. They\u2019re accelerated by NVIDIA Inception , a program that offers go-to-market support, expertise and technology for cutting-edge startups.\n\nFor those who prefer eye candy, artists also recreated a ramen restaurant using the NVIDIA Omniverse platform for creating and operating metaverse applications.\n\nToronto startup HuEx is developing a conversational AI assistant to handle order requests at the drive-thru speaker box. The real-time voice service, which runs on the NVIDIA Jetson edge AI platform , transcribes voice orders to text for staff members to fulfill.\n\nThe technology, integrated with the existing drive-thru headset system, allows for team members to hear the orders and jump in to assist if needed. It\u2019s in pilot tests to help support service at popular Canadian fast-service chains.\n\nSan Diego-based startup Vistry is tackling a growing labor shortage among quick-service restaurants with an AI-enabled, automated order-taking solution. The system, built with the NVIDIA Riva software development kit, uses natural language processing for menu understanding and speech \u2014 plus recommendation systems to enable faster, more accurate order-taking and more relevant, personalized offers.\n\nVistry is also using the NVIDIA Metropolis application framework to create computer vision applications that can help automate curbside check-ins, speed up drive-thrus and predict the time it takes to prepare a customer\u2019s order. Its tools are powered by NVIDIA Jetson and NVIDIA A2 Tensor Core GPUs .\n\nOakland-based startup Cartken is deploying NVIDIA Jetson-enabled sidewalk robots for last-mile deliveries of coffee and meals. Its autonomous mobile robot technology is used to deliver Grubhub orders to students at the University of Arizona and Ohio State \u2014 and Starbucks goods in malls in Japan.\n\nThe Inception member relies on the NVIDIA Jetson AGX Orin module to run six cameras that aid in simultaneous localization and mapping, navigation, and wheel odometry.\n\nTelexistence, an Inception startup based in Tokyo, is deploying hundreds of NVIDIA AI-powered robots to restock shelves at FamilyMart, a leading Japanese convenience store chain. The robots handle repetitive tasks like refilling beverage displays, which frees up retail staff to interact with customers.\n\nFor AI model training, the team relied on NVIDIA DGX systems . The robot uses the NVIDIA Jetson AGX Xavier for AI processing at the edge, and the NVIDIA Jetson TX2 module to transmit video-streaming data.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTIvMjIvdG9wLWFpLWZvb2Qtc3Rvcmllcy8=.pdf"}, {"question": "What is the purpose of showcasing the Tokyo ramen shop scene?", "gt_answer": "The purpose is to highlight the capabilities of NVIDIA RTX-powered real-time rendering and physics simulation.", "gt_context": "NVIDIA technology isn\u2019t just accelerating food-related applications for the restaurant industry \u2014 it\u2019s also powering tantalizing virtual scenes complete with mouth-watering, calorie-free dishes.\n\nTwo dozen NVIDIA artists and freelancers around the globe showcased the capabilities of NVIDIA Omniverse by recreating a Tokyo ramen shop in delicious detail \u2014 including simmering pots of\n\nnoodles, steaming dumplings and bottled drinks.\n\nThe scene, created to highlight NVIDIA RTX -powered real-time rendering and physics simulation capabilities, consists of more than 22 million triangles, 350 unique textured models and 3,000 4K-resolution texture maps.\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/12/22/top-ai-food-stories/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTIvMjIvdG9wLWFpLWZvb2Qtc3Rvcmllcy8=.pdf"}, {"question": "What is the benchmark completion time for the GPT-3-based training benchmark on a cluster of 3,584 H100 GPUs?", "gt_answer": "The benchmark was completed in less than eleven minutes.", "gt_context": "NVIDIA H100 GPUs Set Standard for Generative AI in Debut MLPerf Benchmark\n\nIn a new industry-standard benchmark, a cluster of 3,584 H100 GPUs at cloud service provider CoreWeave completed a massive GPT-3-based benchmark in just 11 minutes.\n\nAuthor: Dave Salvator\n\nLeading users and industry-standard benchmarks agree: NVIDIA H100 Tensor Core GPUs deliver the best AI performance, especially on the large language models ( LLMs ) powering generative AI .\n\nH100 GPUs set new records on all eight tests in the latest MLPerf training benchmarks released today, excelling on a new MLPerf test for generative AI. That excellence is delivered both per-accelerator and at-scale in massive servers.\n\nFor example, on a commercially available cluster of 3,584 H100 GPUs co-developed by startup Inflection AI and operated by CoreWeave , a cloud service provider specializing in GPU-accelerated workloads, the system completed the massive GPT-3-based training benchmark in less than eleven minutes.\n\n\u201cOur customers are building state-of-the-art generative AI and LLMs at scale today, thanks to our thousands of H100 GPUs on fast, low-latency InfiniBand networks,\u201d said Brian Venturo, co-founder and CTO of CoreWeave. \u201cOur joint MLPerf submission with NVIDIA clearly demonstrates the great performance our customers enjoy.\u201d\n\nInflection AI harnessed that performance to build the advanced LLM behind its first personal AI, Pi , which stands for personal intelligence . The company will act as an AI studio, creating personal AIs users can interact with in simple, natural ways.\n\n\u201cAnyone can experience the power of a personal AI today based on our state-of-the-art large language model that was trained on CoreWeave\u2019s powerful network of H100 GPUs,\u201d said Mustafa Suleyman, CEO of Inflection AI.\n\nCo-founded in early 2022 by Mustafa and Kar\u00e9n Simonyan of DeepMind and Reid Hoffman, Inflection AI aims to work with CoreWeave to build one of the largest computing clusters in the world using NVIDIA GPUs.\n\nThese user experiences reflect the performance demonstrated in the MLPerf benchmarks announced today .\n\nH100 GPUs delivered the highest performance on every benchmark, including large language models, recommenders, computer vision, medical imaging and speech recognition. They were the only chips to run all eight tests, demonstrating the versatility of the NVIDIA AI platform.\n\nTraining is typically a job run at scale by many GPUs working in tandem. On every MLPerf test, H100 GPUs set new at-scale performance records for AI training.\n\nOptimizations across the full technology stack enabled near linear performance scaling on the demanding LLM test as submissions scaled from hundreds to thousands of H100 GPUs.\n\nIn addition, CoreWeave delivered from the cloud similar performance to what NVIDIA achieved from an AI supercomputer running in a local data center. That\u2019s a testament to the low-latency networking of the NVIDIA Quantum-2 InfiniBand networking CoreWeave uses.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDYvMjcvZ2VuZXJhdGl2ZS1haS1kZWJ1dC1tbHBlcmYv.pdf"}, {"question": "What is NVIDIA AI Enterprise?", "gt_answer": "NVIDIA AI Enterprise is the software layer of the NVIDIA AI platform that enables optimized performance on leading accelerated computing infrastructure.", "gt_context": "In this round, MLPerf also updated its benchmark for recommendation systems.\n\nThe new test uses a larger data set and a more modern AI model to better reflect the challenges cloud service providers face. NVIDIA was the only company to submit results on the enhanced benchmark.\n\nNearly a dozen companies submitted results on the NVIDIA platform in this round. Their work shows NVIDIA AI is backed by the industry\u2019s broadest ecosystem in machine learning.\n\nSubmissions came from major system makers that include ASUS, Dell Technologies, GIGABYTE, Lenovo, and QCT. More than 30 submissions ran on H100 GPUs.\n\nThis level of participation lets users know they can get great performance with NVIDIA AI both in the cloud and in servers running in their own data centers.\n\nNVIDIA ecosystem partners participate in MLPerf because they know it\u2019s a valuable tool for customers evaluating AI platforms and vendors.\n\nThe benchmarks cover workloads users care about \u2014 computer vision, translation and reinforcement learning, in addition to generative AI and recommendation systems .\n\nUsers can rely on MLPerf results to make informed buying decisions, because the tests are transparent and objective. The benchmarks enjoy backing from a broad group that includes Arm, Baidu, Facebook AI, Google, Harvard, Intel, Microsoft, Stanford and the University of Toronto.\n\nMLPerf results are available today on H100, L4 and NVIDIA Jetson platforms across AI training, inference and HPC benchmarks. We\u2019ll be making submissions on NVIDIA Grace Hopper systems in future MLPerf rounds as well.\n\nAs AI\u2019s performance requirements grow, it\u2019s essential to expand the efficiency of how that performance is achieved. That\u2019s what accelerated computing does.\n\nData centers accelerated with NVIDIA GPUs use fewer server nodes, so they use less rack space and energy. In addition, accelerated networking boosts efficiency and performance, and ongoing software optimizations bring x-factor gains on the same hardware.\n\nEnergy-efficient performance is good for the planet and business, too. Increased performance can speed time to market and let organizations build more advanced applications.\n\nEnergy efficiency also reduces costs because data centers accelerated with NVIDIA GPUs use fewer server nodes. Indeed, NVIDIA powers 22 of the top 30 supercomputers on the latest Green500 list .\n\nNVIDIA AI Enterprise , the software layer of the NVIDIA AI platform, enables optimized performance on leading accelerated computing infrastructure. The software comes with the enterprise-grade support, security and reliability required to run AI in the corporate data center.\n\nAll the software used for these tests is available from the MLPerf repository, so virtually anyone can get these world-class results.\n\nOptimizations are continuously folded into containers available on NGC , NVIDIA\u2019s catalog for GPU-accelerated software.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDYvMjcvZ2VuZXJhdGl2ZS1haS1kZWJ1dC1tbHBlcmYv.pdf"}, {"question": "What is the purpose of the technical blog?", "gt_answer": "The purpose of the technical blog is to provide a deeper dive into the optimizations fueling NVIDIA's MLPerf performance and efficiency.", "gt_context": "Read this technical blog for a deeper dive into the optimizations fueling NVIDIA\u2019s MLPerf performance and efficiency.\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/06/27/generative-ai-debut-mlperf/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDYvMjcvZ2VuZXJhdGl2ZS1haS1kZWJ1dC1tbHBlcmYv.pdf"}, {"question": "What is NVIDIA Spectrum-X?", "gt_answer": "NVIDIA Spectrum-X is an accelerated networking platform designed to improve the performance and efficiency of Ethernet-based AI clouds.", "gt_context": "NVIDIA Launches Accelerated Ethernet Platform for Hyperscale Generative AI\n\nNew NVIDIA Spectrum-X Networking Platform Combines NVIDIA Spectrum-4, BlueField-3 DPUs and Acceleration Software; World-Leading Cloud Service Providers Adopting Platform to Scale Out Generative AI Services\n\nCOMPUTEX\u2014NVIDIA today announced NVIDIA Spectrum-X\u2122, an accelerated networking platform designed to improve the performance and efficiency of Ethernet-based AI clouds.\n\nNVIDIA Spectrum-X is built on networking innovations powered by the tight coupling of the NVIDIA Spectrum-4 Ethernet switch with the NVIDIA BlueField\u00ae-3 DPU, achieving 1.7x better overall AI performance and power efficiency, along with consistent, predictable performance in multi-tenant environments. Spectrum-X is supercharged by NVIDIA acceleration software and software development kits (SDKs), allowing developers to build software-defined, cloud-native AI applications.\n\nThe delivery of end-to-end capabilities reduces run-times of massive transformer-based generative AI models. This allows network engineers, AI data scientists and cloud service providers to improve results and make informed decisions faster.\n\nThe world\u2019s top hyperscalers are adopting NVIDIA Spectrum-X, including industry-leading cloud innovators.\n\nAs a blueprint and testbed for NVIDIA Spectrum-X reference designs, NVIDIA is building Israel-1, a hyperscale generative AI supercomputer to be deployed in its Israeli data center on Dell PowerEdge XE9680 servers based on the NVIDIA HGX\u2122 H100 eight-GPU platform, BlueField-3 DPUs and Spectrum-4 switches.\n\n\u201cTransformative technologies such as generative AI are forcing every enterprise to push the boundaries of data center performance in pursuit of competitive advantage,\u201d said Gilad Shainer, senior vice president of networking at NVIDIA. \u201cNVIDIA Spectrum-X is a new class of Ethernet networking that removes barriers for next-generation AI workloads that have the potential to transform entire industries.\u201d\n\nThe NVIDIA Spectrum-X networking platform is highly versatile and can be used in various AI applications. It uses fully standards-based Ethernet and is interoperable with Ethernet-based stacks.\n\nThe platform starts with Spectrum-4, the world\u2019s first 51Tb/sec Ethernet switch built specifically for AI networks. Advanced RoCE extensions work in concert across the Spectrum-4 switches, BlueField-3 DPUs and NVIDIA LinkX optics to create an end-to-end 400GbE network that is optimized for AI clouds.\n\nNVIDIA Spectrum-X enhances multi-tenancy with performance isolation to ensure tenants\u2019 AI workloads perform optimally and consistently. It also offers better AI performance visibility, as it can identify performance bottlenecks and it features completely automated fabric validation.", "document": "TlZJRElBIEV0aGVybmV0IFBsYXRmb3JtIDUvMjgvMjMucGRm.pdf"}, {"question": "Which companies offer NVIDIA Spectrum-X?", "gt_answer": "Companies offering NVIDIA Spectrum-X include Dell Technologies, Lenovo and Supermicro.", "gt_context": "Acceleration software driving Spectrum-X includes powerful NVIDIA SDKs such as Cumulus Linux, pure SONiC and NetQ \u2014 which together enable the networking platform\u2019s extreme performance. It also includes the NVIDIA DOCA\u2122 software framework, which is at the heart of BlueField DPUs.\n\nNVIDIA Spectrum-X enables unprecedented scale of 256 200Gb/s ports connected by a single switch, or 16,000 ports in a two-tier leaf-spine topology to support the growth and expansion of AI clouds while maintaining high levels of performance and minimizing network latency.\n\nImmediate Ecosystem Adoption Companies offering NVIDIA Spectrum-X include Dell Technologies, Lenovo and Supermicro.\n\nAvailability NVIDIA Spectrum-X, Spectrum-4 switches, BlueField-3 DPUs and 400G LinkX optics are available now.\n\nLearn more about NVIDIA Spectrum-X at COMPUTEX.\n\nAbout NVIDIA Since its founding in 1993, NVIDIA (NASDAQ: NVDA) has been a pioneer in accelerated computing. The company\u2019s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, ignited the era of modern AI and is fueling the creation of the industrial metaverse. NVIDIA is now a full-stack computing company with data- center-scale offerings that are reshaping industry. More information at https://nvidianews.nvidia.com/.\n\nCertain statements in this press release including, but not limited to, statements as to: the benefits, impact, performance,", "document": "TlZJRElBIEV0aGVybmV0IFBsYXRmb3JtIDUvMjgvMjMucGRm.pdf"}, {"question": "What are some of the collaborations mentioned in the paragraph?", "gt_answer": "The paragraph mentions collaborations with Dell Technologies, Lenovo, and Supermicro.", "gt_context": "features and availability of our products, collaborations, services and technologies, including NVIDIA Spectrum-X networking platform, Spectrum-4 switches, BlueField-3 DPUs, NVIDIA acceleration software and SDKs, Israel-1, NVIDIA HGX H100 eight-GPU platform, RoCE, LinkX, 400G LinkX optics, Cumulus Linux, pure SONiC, NetQ and NVIDIA DOCA; our collaborations with Dell Technologies, Lenovo and Supermicro, and the benefits, impact, features and availability thereof; the world\u2019s top hyperscalers adopting NVIDIA Spectrum-X; and transformative technologies such as generative AI forcing every enterprise to push the boundaries of data center performance in pursuit of competitive advantage are forward- looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners' products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company's website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward- looking statements to reflect future events or circumstances.\n\n\u00a9 2023 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, BlueField, NVIDIA DOCA, NVIDIA HGX, NVIDIA Spectrum and NVIDIA Spectrum-X are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.\n\nAlex Shapiro Enterprise Networking 1-415-608-5044 ashapiro@nvidia.com", "document": "TlZJRElBIEV0aGVybmV0IFBsYXRmb3JtIDUvMjgvMjMucGRm.pdf"}, {"question": "What is Evozyne's approach to engineering proteins?", "gt_answer": "Evozyne's approach can alter half or more of the amino acids in a protein in a single round, allowing them to explore proteins never seen before that have new and useful functions.", "gt_context": "NVIDIA, Evozyne Create Generative AI Model for Proteins Scientists use NVIDIA BioNeMo for large language models that generate high-quality proteins that can speed drug design and help create a more sustainable environment.\n\nAuthor: Rick Merritt\n\nUsing a pretrained AI model from NVIDIA, startup Evozyne created two proteins with significant potential in healthcare and clean energy.\n\nA joint paper released today describes the process and the biological building blocks it produced. One aims to cure a congenital disease, another is designed to consume carbon dioxide to reduce global warming.\n\nInitial results show a new way to accelerate drug discovery and more.\n\n\u201cIt\u2019s been really encouraging that even in this first round the AI model has produced synthetic proteins as good as naturally occurring ones,\u201d said Andrew Ferguson, Evozyne\u2019s co-founder and a co-author of the paper. \u201cThat tells us it\u2019s learned nature\u2019s design rules correctly.\u201d\n\nEvozyne used NVIDIA\u2019s implementation of ProtT5, a transformer model that\u2019s part of NVIDIA BioNeMo , a software framework and service for creating AI models for healthcare.\n\n\u201cBioNeMo really gave us everything we needed to support model training and then run jobs with the model very inexpensively \u2014 we could generate millions of sequences in just a few seconds,\u201d said Ferguson, a molecular engineer working at the intersection of chemistry and machine learning.\n\nThe model lies at the heart of Evovyne\u2019s process called ProT-VAE. It\u2019s a workflow that combines BioNeMo with a variational autoencoder that acts as a filter.\n\n\u201cUsing large language models combined with variational autoencoders to design proteins was not on anybody\u2019s radar just a few years ago,\u201d he said.\n\nLike a student reading a book, NVIDIA\u2019s transformer model reads sequences of amino acids in millions of proteins. Using the same techniques neural networks employ to understand text, it learned how nature assembles these powerful building blocks of biology.\n\nThe model then predicted how to assemble new proteins suited for functions Evozyne wants to address.\n\n\u201cThe technology is enabling us to do things that were pipe dreams 10 years ago,\u201d he said.\n\nMachine learning helps navigate the astronomical number of possible protein sequences, then efficiently identifies the most useful ones.\n\nThe traditional method of engineering proteins, called directed evolution, uses a slow, hit-or-miss approach. It typically only changes a few amino acids in sequence at a time.\n\nBy contrast, Evozyne\u2019s approach can alter half or more of the amino acids in a protein in a single round. That\u2019s the equivalent of making hundreds of mutations.\n\n\u201cWe\u2019re taking huge jumps which allows us to explore proteins never seen before that have new and useful functions,\u201d he said.\n\nUsing the new process, Evozyne plans to build a range of proteins to fight diseases and climate change.\n\n\u201cNVIDIA\u2019s been an incredible partner on this work,\u201d he said.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDEvMTIvZ2VuZXJhdGl2ZS1haS1wcm90ZWlucy1ldm96eW5lLw==.pdf"}, {"question": "What impact did NVIDIA have on Evozyne's work?", "gt_answer": "NVIDIA scaled jobs to multiple GPUs, speeding up training and reducing the time to train large AI models from months to a week.", "gt_context": "\u201cNVIDIA\u2019s been an incredible partner on this work,\u201d he said.\n\n\u201cThey scaled jobs to multiple GPUs to speed up training,\u201d said Joshua Moller, a data scientist at Evozyne. \u201cWe were getting through entire datasets every minute.\u201d\n\nThat reduced the time to train large AI models from months to a week. \u201cIt allowed us to train models \u2014 some with billions of trainable parameters \u2014 that just would not be possible otherwise,\u201d Ferguson said.\n\nThe horizon for AI-accelerated protein engineering is wide.\n\n\u201cThe field is moving incredibly quickly, and I\u2019m really excited to see what comes next,\u201d he said, noting the recent rise of diffusion models.\n\n\u201cWho knows where we will be in five years\u2019 time.\u201d\n\nSign up for early access to the NVIDIA BioNeMo to see how it can accelerate your applications.\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/01/12/generative-ai-proteins-evozyne/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDEvMTIvZ2VuZXJhdGl2ZS1haS1wcm90ZWlucy1ldm96eW5lLw==.pdf"}, {"question": "What advancements were showcased at the Microsoft Build developer conference?", "gt_answer": "At the Microsoft Build developer conference, NVIDIA and Microsoft showcased advancements in Windows 11 PCs and workstations with NVIDIA RTX GPUs to meet the demands of generative AI.", "gt_context": "NVIDIA and Microsoft Drive Innovation for Windows PCs in New Era of Generative AI\n\nIndustry leaders break down barriers to enable developers to easily train and deploy advanced AI models on Windows 11, and deliver power-efficient inferencing on RTX PCs and workstations.\n\nAuthor: Jesse Clayton\n\nGenerative AI \u2014 in the form of large language model (LLM) applications like ChatGPT, image generators such as Stable Diffusion and Adobe Firefly, and game rendering techniques like NVIDIA DLSS 3 Frame Generation \u2014 is rapidly ushering in a new era of computing for productivity, content creation, gaming and more.\n\nAt the Microsoft Build developer conference, NVIDIA and Microsoft today showcased a suite of advancements in Windows 11 PCs and workstations with NVIDIA RTX GPUs to meet the demands of generative AI .\n\nMore than 400 Windows apps and games already employ AI technology, accelerated by dedicated processors on RTX GPUs called Tensor Cores. Today\u2019s announcements, which include tools to develop AI on Windows PCs, frameworks to optimize and deploy AI, and driver performance and efficiency improvements, will empower developers to build the next generation of Windows apps with generative AI at their core.\n\n\u201cAI will be the single largest driver of innovation for Windows customers in the coming years,\u201d said Pavan Davuluri, corporate vice president of Windows silicon and system integration at Microsoft. \u201cBy working in concert with NVIDIA on hardware and software optimizations, we\u2019re equipping developers with a transformative, high-performance, easy-to-deploy experience.\u201d\n\nAI development has traditionally taken place on Linux, requiring developers to either dual-boot their systems or use multiple PCs to work in their AI development OS while still accessing the breadth and depth of the Windows ecosystem.\n\nOver the past few years, Microsoft has been building a powerful capability to run Linux directly within the Windows OS, called Windows Subsystem for Linux (WSL). NVIDIA has been working closely with Microsoft to deliver GPU acceleration and support for the entire NVIDIA AI software stack inside WSL. Now developers can use Windows PC for all their local AI development needs with support for GPU-accelerated deep learning frameworks on WSL.\n\nWith NVIDIA RTX GPUs delivering up to 48GB of RAM in desktop workstations, developers can now work with models on Windows that were previously only available on servers. The large memory also improves the performance and quality for local fine-tuning of AI models, enabling designers to customize them to their own style or content. And because the same NVIDIA AI software stack runs on NVIDIA data center GPUs, it\u2019s easy for developers to push their models to Microsoft Azure Cloud for large training runs.\n\nWith trained models in hand, developers need to optimize and deploy AI for target devices.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDUvMjMvbWljcm9zb2Z0LWJ1aWxkLW52aWRpYS1haS13aW5kb3dzLXJ0eC8=.pdf"}, {"question": "What is the purpose of the Microsoft Olive toolchain?", "gt_answer": "The purpose of the Microsoft Olive toolchain is to optimize and convert PyTorch models to ONNX, allowing developers to tap into GPU hardware acceleration and deploy Tensor Core-accelerated models to PC or cloud.", "gt_context": "With trained models in hand, developers need to optimize and deploy AI for target devices.\n\nMicrosoft released the Microsoft Olive toolchain for optimization and conversion of PyTorch models to ONNX, enabling developers to automatically tap into GPU hardware acceleration such as RTX Tensor Cores. Developers can optimize models via Olive and ONNX, and deploy Tensor Core-accelerated models to PC or cloud. Microsoft continues to invest in making PyTorch and related tools and frameworks work seamlessly with WSL to provide the best AI model development experience.\n\nOnce deployed, generative AI models demand incredible inference performance. RTX Tensor Cores deliver up to 1,400 Tensor TFLOPS for AI inferencing. Over the last year, NVIDIA has worked to\n\nimprove DirectML performance to take full advantage of RTX hardware.\n\nOn May 24, we\u2019ll release our latest optimizations in Release 532.03 drivers that combine with Olive-optimized models to deliver big boosts in AI performance. Using an Olive-optimized version of the Stable Diffusion text-to-image generator with the popular Automatic1111 distribution, performance is improved over 2x with the new driver.\n\nWith AI coming to nearly every Windows application, efficiently delivering inference performance is critical \u2014 especially for laptops. Coming soon, NVIDIA will introduce new Max-Q low-power inferencing for AI-only workloads on RTX GPUs. It optimizes Tensor Core performance while keeping power consumption of the GPU as low as possible, extending battery life and maintaining a cool, quiet system. The GPU can then dynamically scale up for maximum AI performance when the workload demands it.\n\nJoin the PC AI Revolution Now\n\nTop software developers \u2014 like Adobe, DxO, ON1 and Topaz \u2014 have already incorporated NVIDIA AI technology with more than 400 Windows applications and games optimized for RTX Tensor Cores.\n\n\u201cAI, machine learning and deep learning power all Adobe applications and drive the future of creativity. Working with NVIDIA we continuously optimize AI model performance to deliver the best possible experience for our Windows users on RTX GPUs.\u201d \u2014 Ely Greenfield, CTO of digital media at Adobe\n\n\u201cNVIDIA is helping to optimize our WinML model performance on RTX GPUs, which is accelerating the AI in DxO DeepPRIME, as well as providing better denoising and demosaicing, faster.\u201d \u2014 Renaud Capolunghi, senior vice president of engineering at DxO\n\n\u201cWorking with NVIDIA and Microsoft to accelerate our AI models running in Windows on RTX GPUs is providing a huge benefit to our audience. We\u2019re already seeing 1.5x performance gains in our suite of AI-powered photography editing software.\u201d \u2014 Dan Harlacher, vice president of products at ON1", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDUvMjMvbWljcm9zb2Z0LWJ1aWxkLW52aWRpYS1haS13aW5kb3dzLXJ0eC8=.pdf"}, {"question": "What resources are available for developers to test generative AI models on Windows PCs?", "gt_answer": "An Olive-optimized version of the Dolly 2.0 large language model is available on Hugging Face. And a PC-optimized version of NVIDIA NeMo large language model for conversational AI is coming soon to Hugging Face.", "gt_context": "\u201cOur extensive work with NVIDIA has led to improvements across our suite of photo- and video-editing applications. With RTX GPUs, AI performance has improved drastically, enhancing the experience for users on Windows PCs.\u201d \u2014 Suraj Raghuraman, head of AI engine development at Topaz Labs\n\nNVIDIA and Microsoft are making several resources available for developers to test drive top generative AI models on Windows PCs. An Olive-optimized version of the Dolly 2.0 large language model is available on Hugging Face. And a PC-optimized version of NVIDIA NeMo large language model for conversational AI is coming soon to Hugging Face.\n\nDevelopers can also learn how to optimize their applications end-to-end to take full advantage of GPU-acceleration via the NVIDIA AI for accelerating applications developer site .\n\nThe complementary technologies behind Microsoft\u2019s Windows platform and NVIDIA\u2019s dynamic AI hardware and software stack will help developers quickly and easily develop and deploy generative AI on Windows 11.\n\nMicrosoft Build runs through Thursday, May 25. Tune into to learn more on shaping the future of work with AI .\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/05/23/microsoft-build-nvidia-ai-windows-rtx/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDUvMjMvbWljcm9zb2Z0LWJ1aWxkLW52aWRpYS1haS13aW5kb3dzLXJ0eC8=.pdf"}, {"question": "What is Arteana's Art Squad?", "gt_answer": "Arteana's Art Squad is a computer graphics animated series featuring vibrant characters who use the power of art to solve the world's problems. They come together in the Junior School art classroom, where each brings unique artistic talents, knowledge, and perspective on art.", "gt_context": "\u2018Arteana\u2019s Art Squad\u2019 Assembles \u2014 Indie Showrunner Rafi Nizam Creates High-End Children\u2019s Show on a Budget\n\nNVIDIA Omniverse, USD Composer and ASUS ProArt Hardware combine with stunning effect this week \u2018In the NVIDIA Studio.\u2019\n\nAuthor: Gerardo Delgado\n\nEditor\u2019s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks and demonstrates how NVIDIA Studio technology improves creative workflows. We\u2019re also deep diving into new GeForce RTX 40 Series GPU features, technologies and resources and how they dramatically accelerate content creation.\n\nRafi Nizam is an award-winning independent animator, director, character designer and more. He\u2019s developed feature films at Sony Pictures, children\u2019s series and comedies at BBC and global transmedia content at NBCUniversal.\n\nHe\u2019s also the creator of Arteana\u2019s Art Squad \u2014 a computer graphics animated series featuring vibrant characters who use the power of art to solve the world\u2019s problems. They come together in the Junior School art classroom, where each brings unique artistic talents, knowledge and perspective on art history, art therapy and in art-making.\n\nAimed at children, the series seeks to inspire viewers by portraying the characters\u2019 artistic journeys and the power of creative expression. Their adventures are meant to spark a sense of empathy by exploring the universal themes of self-doubt, social dynamics, success and failure. Underscoring the power of imagination and creative thinking is a common throughline.\n\nNizam\u2019s creative insight and unique perspective are the subjects of this week\u2019s In the NVIDIA Studio installment.\n\nThe artist recently participated in the ASUS ProArt Masters\u2019 Talks sessions program, where he demonstrated how ASUS ProArt solutions , including the NVIDIA Studio -validated ProArt Studiobook Pro 16 OLED laptop with a GeForce RTX 3060 GPU and the Scan 3XS RTX Studio workstation with NVIDIA RTX A6000 graphic cards, helped produce a high-end animated series on an indie budget.\n\nMeet Arteana, leader of the Art Squad, who possesses a keen interest in historical artists and art movements.\n\nRivette demonstrates diverse art techniques and is always looking for new ways to express her creativity.\n\nThreeDee, seen here playing the drums, is a kind and compassionate character who uses art therapy as a means of promoting well-being and healing and to uncover the underlying worries that plague the squad.\n\nThen there\u2019s Figgi, whose spontaneous performance art inspires others to redefine boundaries and embrace individuality.\n\nRounding out the squad is PuttPupp \u2014 a lovable and playful character made of putty erasers \u2014 who serves as the class pet.\n\nNizam \u2014 matching the demeanor and spirit of his work \u2014 is honest. He\u2019s not an expert at 3D modeling, nor is he a visual effects artist, and he\u2019s not the most adept at production pipelines. However, he does love to draw.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDkvMDYvYXN1cy1wcm9hcnQtc3R1ZGlvLWxhcHRvcC1vbW5pdmVyc2Utb3BlbnVzZC8=.pdf"}, {"question": "What platform did Rafi Nizam use to build Arteana's Art Squad?", "gt_answer": "Rafi Nizam used NVIDIA Omniverse to build Arteana's Art Squad.", "gt_context": "His focus has always been on characters, storytelling and world-building. He built Arteana\u2019s Art Squad independently while working in NVIDIA Omniverse , a platform for building and connecting 3D tools and apps.\n\n\u201cSpeaking as a storyteller first and a non-technical indie creator second, I find Omniverse to be the most user-friendly and versatile way to connect the 3D apps in my workflows I\u2019ve come to rely on, and enjoy this way of working from concept to final pixel.\u201d \u2014 Rafi Nizam\n\n\u201cAs a showrunner, embarking on making a CG animated show without a crew is kind of daunting, but I\u2019m using NVIDIA Omniverse to discover ways to overcome my limitations in this space,\u201d Nizam said.\n\nNizam began by modeling each squad member and building production assets in Adobe Substance 3D Modeler using VR. He also utilized the VR app Gravity Sketch to create models for the different objects required in each set or scene.\n\n\u201cDesigning 3D character models in VR makes pre-production and look dev possible for an artist like me,\u201c he said.\n\nNizam imported his character into Autodesk Maya for the rigging process \u2014 creating a skeleton for the 3D model so that it can move.\n\nHis RTX GPU delivered AI-powered, accelerated denoising with the default Autodesk Arnold renderer, resulting in highly interactive and photorealistic renders.\n\nNizam then moved to Adobe Substance 3D Painter to create textures and materials, applying them to production assets. NVIDIA RTX-accelerated light and ambient occlusion baking optimized assets in mere seconds.\n\nNext, Nizam deployed Unreal Engine to record motion captures via a Perception Neuron suit, creating scenes and camera sequences in real time. NVIDIA DLSS technology increased the interactivity of the viewport by using AI to upscale frames rendered at lower resolution, while retaining high-fidelity detail.\n\n\u201cMotion capture fosters experimentation and spontaneous collaboration with performers capturing an abundance of movement, a luxury often untenable for indie projects,\u201d said Nizam.\n\nNVIDIA Omniverse\u2019s spatial computing capabilities took Nizam\u2019s creative workflow to the next level. The Omniverse USD Composer\u2019s native VR support enables artists to interactively assemble, light and navigate scenes in real time, individually or collaboratively, in fully ray-traced VR.\n\nHere, Nizam adjusted scene lighting and approved the overall layout in VR. He then moved to desktop to polish and refine the 3D sequences, reviewing final shots before exporting the completed project.\n\nNizam is a big proponent of Omniverse, OpenUSD and its ability to streamline 3D content creation.\n\n\u201cLess time and effort, more productivity, cost savings and simpler real-time workflows \u2014 I use Omniverse daily for these reasons,\u201d he said.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDkvMDYvYXN1cy1wcm9hcnQtc3R1ZGlvLWxhcHRvcC1vbW5pdmVyc2Utb3BlbnVzZC8=.pdf"}, {"question": "What is the foundation of the Omniverse platform?", "gt_answer": "The foundation of the Omniverse platform is OpenUSD, an open and extensible framework for describing, composing, simulating and collaborating within 3D worlds.", "gt_context": "The Omniverse platform has at its foundation OpenUSD, an open and extensible framework for describing, composing, simulating and collaborating within 3D worlds. OpenUSD unlocks Omniverse\u2019s potential by enabling movement between 3D apps \u2014 artists can transition all individual assets to their desired format with a single click.\n\n\u201cAll apps were in sync and updated on the fly while I assembled it, thanks to Omniverse being the backbone of my CG creative and production process,\u201d Nizam said.\n\n\u201cI rely on Omniverse Nucleus and Cache as the USD infrastructure for my production pipeline, allowing for seamless collaboration and facilitating cross-application workflows,\u201d Nizam said. \u201cAdditionally, I utilize various software connectors, which help bridge different apps and streamline the creative process.\u201d\n\nCheck out Nizam on Instagram .\n\nFollow NVIDIA Studio on Instagram , Twitter and Facebook . Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter .\n\nGet started with NVIDIA Omniverse by downloading the free standard license or learn how Omniverse Enterprise can connect your team . Developers can get started with Omniverse resources. Stay up to date on the platform by subscribing to the newsletter and follow NVIDIA Omniverse on Instagram , Medium and Twitter .\n\nFor more, join the Omniverse community and check out the Omniverse forums , Discord server , Twitch and YouTube channels.\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/09/06/asus-proart-studio-laptop-omniverse-openusd/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDkvMDYvYXN1cy1wcm9hcnQtc3R1ZGlvLWxhcHRvcC1vbW5pdmVyc2Utb3BlbnVzZC8=.pdf"}, {"question": "Which program has Verdant joined as a member?", "gt_answer": "Verdant is a member of NVIDIA Inception.", "gt_context": "Apple of My AI: Startup Sprouts Multitasking Farm Tool for Organics Verdant runs on the NVIDIA Jetson edge AI platform and relies on TAO Toolkit\u2019s transfer learning to boost model production by 5x.\n\nAuthor: Scott Martin\n\nIt all started with two software engineers and a tomato farmer on a West Coast road trip.\n\nVisiting farms to survey their needs, the three hatched a plan at an apple orchard: build a highly adaptable 3D vision AI system for automating field tasks.\n\nVerdant, based in the San Francisco Bay Area, is developing AI that promises versatile farm assistance in the form of a tractor implement for weeding, fertilizing and spraying.\n\nFounders Lawrence Ibarria, Gabe Sibley and Curtis Garner \u2014 two engineers from Cruise Automation and a tomato farming manager \u2014 are harnessing the NVIDIA Jetson edge AI platform and NVIDIA Metropolis SDKs such as TAO Toolkit and DeepStream for this ambitious slice of farm automation.\n\nThe startup, founded in 2018, is commercially deployed in carrot farms and in trials at apple, garlic, broccoli and lettuce farms in California\u2019s Central Valley and Imperial Valley, as well as in Oregon.\n\nVerdant plans to help with organic farming by lowering production costs for farmers while increasing yields and providing labor support. It employs the tractor operator, who is trained to manage the AI-driven implements. The company\u2019s robot-as-service model, or RaaS, enables farmers to see metrics on yield improvements and reductions in chemical costs, and pay by the acre for results.\n\n\u201cWe wanted to do something meaningful to help the environment,\u201d said Ibarria, Verdant\u2019s chief operating officer. \u201cAnd it\u2019s not only reducing costs for farmers, it\u2019s also increasing their yield.\u201d\n\nThe company recently landed more than $46 million in series A funding.\n\nAnother recent event at Verdant was hiring as its chief technology officer Frank Dellaert , who is recognized for using graphical models to solve large-scale mapping and 4D reconstruction challenges. A faculty member at Georgia Institute of Technology, Dellaert has led work at Skydio, Facebook Reality Labs and Google AI while on leave from the research university.\n\n\u201cOne of the things that was impressed upon me when joining Verdant was how they measure performance in real-time,\u201d remarked Dellaert. \u201cIt\u2019s a promise to the grower, but it\u2019s also a promise to the environment. It shows whether we do indeed save from all the chemicals being put into the field.\u201d\n\nVerdant is a member of NVIDIA Inception , a free program that provides startups with technical training, go-to-market support, and AI platform guidance.\n\nCompanies worldwide \u2014 Monarch Tractor , Bilberry , Greeneye , FarmWise , John Deere and many others \u2014 are building the next generation of sustainable farming with NVIDIA Jetson AI .", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDMvMTUvdmVyZGFudC1mYXJtLW9yZ2FuaWNzLWpldHNvbi1vcmluLw==.pdf"}, {"question": "What is the benefit of using the Jetson AGX Orin system-on-module in the tractor cabs?", "gt_answer": "The Jetson AGX Orin system-on-module enables Verdant to create 3D visualizations showing plant treatments for the tractor operator.", "gt_context": "Verdant is working with Bolthouse Farms, based in Bakersfield, Calif., to help its carrot-growing business transition to regenerative agriculture practices. The aim is to utilize more sustainable farming practices, including reduction of herbicides.\n\nVerdant is starting with weeding and expanding next into precision fertilizer applications for Bolthouse.\n\nThe computation and automation from Verdant have enabled Bolthouse Farms to understand how to achieve its sustainable farming goals, according to the farm\u2019s management team.\n\nVerdant is putting the Jetson AGX Orin system-on-module inside tractor cabs. The company says that Orin\u2019s powerful computing and availability with ruggedized cases from vendors makes it the only choice for farming applications. Verdant is also collaborating with Jetson ecosystem partners, including RidgeRun, Leopard Imaging and others.\n\nThe module enables Verdant to create 3D visualizations showing plant treatments for the tractor operator. The company uses two stereo cameras for its field visualizations, for inference and to gather data in the field for training models on NVIDIA DGX systems running NVIDIA A100 Tensor Core GPUs back at its headquarters. DGX performance allows Verdant to use larger training datasets to get better model accuracy in inference.\n\n\u201cWe display a model of the tractor and a 3D view of every single carrot and every single weed and the actions we are doing, so it helps customers see what the robot\u2019s seeing and doing,\u201d said Ibarria, noting this can all run on a single AGX Orin module, delivering inference at 29 frames per second in real time.\n\nVerdant relies on NVIDIA DeepStream as the framework for running its core machine learning to help power its detection and segmentation. It also uses custom CUDA kernels to do a number of tracking and positioning elements of its work.\n\nVerdant\u2019s founder and CEO, Sibley, whose post-doctorate research was in simultaneous localization and mapping has brought this expertise to agriculture. This comes in handy to help present a logical representation of the farm, said Ibarria. \u201cWe can see things, and know when and where we\u2019ve seen them,\u201d he said.\n\nThis is important for apples, he said. They can be challenging to treat, as apples and branches often overlap, making it difficult to find the best path to spray them. The 3D visualizations made possible by AGX Orin allow a better understanding of the occlusion and the right path for spraying.\n\n\u201cWith apples, when you see a blossom, you can\u2019t just spray it when you see it, you need to wait 48 hours,\u201d said Ibarria. \u201cWe do that by building a map, relocalizing ourselves saying, \u2018That\u2019s the blossom, I saw it two days ago, and so it\u2019s time to spray.\u2019\u201d", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDMvMTUvdmVyZGFudC1mYXJtLW9yZ2FuaWNzLWpldHNvbi1vcmluLw==.pdf"}, {"question": "What technology does Verdant rely on for its model building pipeline?", "gt_answer": "Verdant relies on NVIDIA TAO Toolkit for its model building pipeline.", "gt_context": "Verdant relies on NVIDIA TAO Toolkit for its model building pipeline. The transfer learning capability in TAO Toolkit enables it to take off-the-shelf models and quickly refine them with images taken in the field. For example, this has made it possible to change from detecting carrots to detecting onions, in just a day. Previously, it took roughly five days to build models from scratch that achieved an acceptable accuracy level.\n\n\u201cOne of our goals here is to leverage technologies like TAO and transfer learning to very quickly begin to operate in new circumstances,\u201d said Dellaert.\n\nWhile cutting model building production time by 5x, the company has also been able to hit 95% precision with its vision systems using these methods.\n\n\u201cTransfer learning is a big weapon in our armory,\u201d he said.\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/03/15/verdant-farm-organics-jetson-orin/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDMvMTUvdmVyZGFudC1mYXJtLW9yZ2FuaWNzLWpldHNvbi1vcmluLw==.pdf"}, {"question": "What is the purpose of creating 3D models of the shipwrecks?", "gt_answer": "The purpose of creating 3D models of the shipwrecks is to allow museumgoers to explore the sunken WWII ships as if they were scuba divers on the ocean floor and to tell the story of one of Australia's greatest naval battles.", "gt_context": "See a Sea Change: 3D Researchers Bring Naval History to Life\n\nA half-million pictures of two shipwrecks will blend into lifelike VR/AR exhibits thanks to accelerated computing.\n\nAuthor: Rick Merritt\n\nMuseumgoers will be able to explore two sunken WWII ships as if they were scuba divers on the ocean floor, thanks to work at Curtin University in Perth, Australia.\n\nExhibits in development, for display in Australia and potentially further afield, will use exquisitely detailed 3D models the researchers are creating to tell the story of one of the nation\u2019s greatest naval battles.\n\nOn Nov. 19, 1941, Australia\u2019s HMAS Sydney (II) and Germany\u2019s HSK Kormoran lobbed hundreds of shells in a duel that lasted less than an hour. More than 700 died, including every sailor on the Sydney. Both ships sank 8,000 feet, 130 miles off the coast of Western Australia, not to be discovered for decades.\n\nAndrew Woods, an expert in stereoscopic 3D visualization and associate professor at Curtin, built an underwater rig with more than a dozen video and still cameras to capture details of the wrecks in 2015.\n\nAsh Doshi, a computer vision specialist and senior research officer at Curtin, is developing and running software on NVIDIA GPUs that stitches the half-million pictures and 300 hours of video they took into virtual and printed 3D models.\n\nIt\u2019s hard, pioneering work in a process called photogrammetry . Commercially available software maxes out at around 10,000 images.\n\n\u201cIt\u2019s highly computationally intensive \u2014 when you double the number of images, you quadruple the compute requirements,\u201d said Woods, who manages the Curtin HIVE , a lab with four advanced visualization systems.\n\n\u201cIt would\u2019ve taken a thousand years to process with our existing systems, even though they are fairly fast,\u201d he said.\n\nWhen completed next year, the work will have taken less than three years, thanks to systems at the nearby Pawsey Supercomputing Centre using NVIDIA V100 and prior-generation GPUs.\n\nAccelerated computing is critical because the work is iterative. Images must be processed, manipulated and then reprocessed.\n\nFor example, Woods said a first pass on a batch of 400 images would take 10 hours on his laptop. By contrast, he could run a first pass in 10 minutes on his system with two NVIDIA RTX A6000 GPUs awarded through NVIDIA\u2019s Applied Research Accelerator Program .\n\nIt would take a month to process 8,000 images on the lab\u2019s fast PCs, work the supercomputer could handle in a day. \u201cRarely would anyone in industry wait a month to process a dataset,\u201d said Woods.\n\nLocal curators can\u2019t wait to get the Sydney and Kormoran models on display. Half the comments on their Tripadvisor page already celebrate 3D films the team took of the wrecks.\n\nThe digital models will more deeply engage museumgoers with interactive virtual and augmented reality exhibits and large-scale 3D prints.\n\n\u201cThese 3D models really help us unravel the story, so people can appreciate the history,\u201d Woods said.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTEvMTgvM2Qtc2hpcHdyZWNrcy1wZXJ0aC8=.pdf"}, {"question": "What is the potential commercial use of the software developed by the researchers?", "gt_answer": "The software could find commercial uses in monitoring undersea pipelines, oil and gas rigs, and more.", "gt_context": "The exhibits are expected to tour museums in Perth and Sydney, and potentially cities in Germany and the U.K., where the ships were built.\n\nWhen the project is complete, the researchers aim to make their code available so others can turn historic artifacts on the seabed into rare museum pieces. Woods expects the software could also find commercial uses monitoring undersea pipelines, oil and gas rigs and more.\n\nOn the horizon, the researchers want to try Instant NeRF , an inverse rendering tool NVIDIA researchers developed to turn 2D images into 3D models in real time.\n\nWoods imagines using it on future shipwreck surveys, possibly running on an NVIDIA DGX System on the survey vessel. It could provide previews in near real time based on images gathered by remotely operated underwater vehicles on the ocean floor, letting the team know when it has enough data to take back for processing on a supercomputer.\n\n\u201cWe really don\u2019t want to return to base to find we\u2019ve missed a spot,\u201d said Woods.\n\nWoods\u2019 passion for 3D has its roots in the sea.\n\n\u201cI saw the movie Jaws 3D when I was a teenager, and the images of sharks exploding out of the screen are in part responsible for taking me down this path,\u201d he said.\n\nThe researchers released the video below to commemorate the 81st anniversary of the sinking of the WWII ships.\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/11/18/3d-shipwrecks-perth/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTEvMTgvM2Qtc2hpcHdyZWNrcy1wZXJ0aC8=.pdf"}, {"question": "What is the purpose of using NVIDIA Omniverse in Vanessa Rosa's artwork?", "gt_answer": "Vanessa Rosa uses NVIDIA Omniverse to accelerate her 3D workflows and create virtual worlds for her artwork.", "gt_context": "Meet the Omnivore: Artist Fires Up NVIDIA Omniverse to Glaze Animated Ceramics Vanessa Rosa imbues traditional ceramics with an Omniverse-animated, sci-fi twist using Audio2Face AI, Blender software and more.\n\nAuthor: Kristen Yee\n\nEditor\u2019s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to accelerate their 3D workflows and create virtual worlds.\n\nVanessa Rosa\u2019s art transcends time: it merges traditional and contemporary techniques, gives new life to ancient tales and imagines possible futures.\n\nThe U.S.-based 3D artist got her start creating street art in Rio de Janeiro, where she grew up. She\u2019s since undertaken artistic tasks like painting murals for Le Centre in Cotonou, Benin, and publishing children\u2019s books.\n\nNow, her focus is on using NVIDIA Omniverse \u2014 a platform for connecting and building custom 3D pipelines \u2014 to create what she calls Little Martians , a sci-fi universe in which ceramic humanoids discuss theories related to the past, present and future of humanity.\n\nTo kick-start the project, Rosa created the most primitive artwork that she could think of: mask-like ceramics, created with local clay and baked with traditional in-ground kilns.\n\nThen, in a sharply modern twist, she 3D scanned them with applications like Polycam and Regard3D. And to animate them, she recorded herself narrating stories with the motion-capture app Face Cap \u2014 as well as generated AI voices from text and used the Omniverse Audio2Face app to create facial animations.\n\nPrior to the Little Martians project, Rosa seldom relied on technology for her artwork. Only recently did she switch from her laptop to a desktop computer powered by an NVIDIA RTX 5000 GPU , which significantly cut her animation render times.\n\nOmniverse quickly became the springboard for Rosa\u2019s digital workflow.\n\n\u201cI\u2019m new to 3D animation, so NVIDIA applications made it much easier for me to get started rather than having to learn how to rig and animate characters solely in software,\u201d she said. \u201cThe power of Omniverse is that it makes 3D simulations accessible to a much larger audience of creators, rather than just 3D professionals.\u201d\n\nAfter generating animations and voice-overs with Omniverse, she employed an add-on for Blender called Faceit that accepts .json files from Audio2Face.\n\n\u201cThis has greatly improved my workflow, as I can continue to develop my projects on Blender after generating animations with Omniverse,\u201d she said.\n\nAt the core of Omniverse is Universal Scene Description \u2014 an open-source, extensible 3D framework and common language for creating virtual worlds. With USD, creators like Rosa can work with multiple applications and extensions all on a centralized platform, further streamlining workflows.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMDgvMjkvb21uaXZlcnNlLWNyZWF0b3ItdmFuZXNzYS1yb3NhLw==.pdf"}, {"question": "What does Rosa plan to use Omniverse for in the future?", "gt_answer": "In the future, Rosa plans to use Omniverse to create more interactive media, such as avatars out of her ceramics.", "gt_context": "Considering herself a beginner in 3D animation, Rosa feels she\u2019s \u201conly scratched the surface of what\u2019s possible with Omniverse.\u201d In the future, she plans to use the platform to create more interactive media.\n\n\u201cI love that with this technology, pieces can exist in the physical world, but gain new life in the digital world,\u201d she said. \u201cI\u2019d like to use it to create avatars out of my ceramics, so that a person could interact\n\nwith it and talk to it using an interface.\u201d\n\nWith Little Martians , Rosa hopes to inspire her audience to think about the long processes of history \u2014 and empower artists that use traditional techniques to explore the possibilities of design and simulation technology like Omniverse.\n\n\u201cI am always exploring new techniques and sharing my process,\u201d she said. \u201cI believe my work can help other people who love the traditional fine arts to adapt to the digital world.\u201d\n\nCreators and developers across the world can download NVIDIA Omniverse for free , and enterprise teams can use the platform for their 3D projects.\n\nLearn more about NVIDIA\u2019s latest AI breakthroughs powering graphics and virtual worlds at GTC , running online Sept. 19-22. Attend the top sessions for 3D creators and developers to learn more about how Omniverse can accelerate workflows, and join the NVIDIA Omniverse User Group to connect with other artists. Register free now.\n\nCheck out artwork from other \u201cOmnivores\u201d and submit projects in the gallery . Connect your workflows to Omniverse with software from Adobe, Autodesk, Epic Games, Maxon, Reallusion and more .\n\nFollow NVIDIA Omniverse on Instagram , Twitter , YouTube and Medium for additional resources and inspiration. Check out the Omniverse forums , and join our Discord server and Twitch channel to chat with the community.\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/08/29/omniverse-creator-vanessa-rosa/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMDgvMjkvb21uaXZlcnNlLWNyZWF0b3ItdmFuZXNzYS1yb3NhLw==.pdf"}, {"question": "What are NVIDIA's most popular videos of 2022?", "gt_answer": "NVIDIA's most popular videos of 2022 put spotlights on photorealistically animated data centers, digital twins for climate science, AI for healthcare, and more.", "gt_context": "AI\u2019s Highlight Reel: Top 5 NVIDIA Videos of 2022\n\nAuthor: Angie Lee\n\nIf AI had a highlight reel, the NVIDIA YouTube channel might just be it.\n\nThe channel showcases the latest breakthroughs in artificial intelligence, with demos, keynotes and other videos that help viewers see and believe the astonishing ways in which the technology is changing the world.\n\nNVIDIA\u2019s most popular videos of 2022 put spotlights on photorealistically animated data centers, digital twins for climate science, AI for healthcare and more.\n\nAnd the latest GTC keynote address by NVIDIA founder and CEO Jensen Huang racked up 19 million views in just three months, making it the channel\u2019s most-watched video of all time.\n\nIt all demonstrates the power of AI, its growth and applications.\n\nBut don\u2019t just take our word for it \u2014 watch NVIDIA\u2019s top five YouTube videos of the year:\n\nWhile watching graphics cards dance and autonomous vehicles cruise, learn more about how NVIDIA\u2019s body of work is fueling all things AI.\n\nIn a dazzling clip that unpacks NVIDIA DGX A100 , the universal system for AI workloads, check out the many applications for the world\u2019s first 5 petaFLOPS AI system.\n\nWatch stunning demos and hear about how the NVIDIA Omniverse platform enables real-time 3D simulation, design collaboration and the creation of virtual worlds.\n\nA collaboration including NVIDIA led to a record-breaking AI technique where a whole genome was sequenced in just about seven hours.\n\nDive into how Siemens Gamesa is using NVIDIA-powered, physics-informed, super-resolution AI models to simulate wind farms and boost energy production.\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/12/16/top-five-nvidia-ai-videos/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTIvMTYvdG9wLWZpdmUtbnZpZGlhLWFpLXZpZGVvcy8=.pdf"}, {"question": "What is the purpose of the Berlin Summit for the Earth Virtualization Engines initiative?", "gt_answer": "The purpose of the Berlin Summit is to bring together scientists and technologists to harness AI and high-performance computing to create climate information systems of the future.", "gt_context": "AI, Digital Twins to Unleash Next Wave of Climate Research Innovation\n\nNVIDIA CEO Jensen Huang presented a keynote on the topic at the Berlin Summit for the Earth Virtualization Engines initiative.\n\nAuthor: Karthik Kashinath\n\nAI and accelerated computing will enable breakthroughs in climate science, NVIDIA founder and CEO Jensen Huang said during a keynote Monday at the Berlin Summit for the Earth Virtualization Engines initiative.\n\n\u201cRichard Feynman once said that \u2018what I can\u2019t create, I don\u2019t understand,\u2019 and that\u2019s the reason why climate modeling is so important,\u201d Huang told 180 attendees at the Harnack House in Berlin, a storied gathering place for the region\u2019s scientific and research community.\n\n\u201cAnd so the work that you do is vitally important to policymakers, researchers, and the industry,\u201d he added.\n\nTo advance this work, the Berlin Summit brings together scientists and technologists from around the globe to harness AI and high-performance computing to create climate information systems of the future.\n\nIn his talk, Huang outlined three miracles that will have to happen for attendees of the Berlin Summit to achieve their goal of creating Earth Virtualization Engines (EVE). He also described NVIDIA\u2019s efforts in this direction through Earth-2 , a highly collaborative effort with the climate science community to create Earth digital twins.\n\nThe first miracle required will be to simulate the climate fast enough and with a high enough resolution\u2013 at kilometer-scale\u2013 to predict impacts at local granularity.\n\nThe second miracle required will be to emulate the physics of climate at high enough fidelity using AI. Generative AI breakthroughs promise new ways of predicting Earth\u2019s climate and enabling real-time interactivity with petabytes of climate data. More importantly, AI is the technology that will help create actionable climate information from raw climate data in myriad ways, unlocking the potential of vast quantities of data to inform decision-making.\n\nThe third miracle needed is the ability to virtualize massive data interactively with NVIDIA Omniverse to \u201cput it in the hands of policymakers, businesses, companies, and scientists.\u201d\n\nEVE is an international collaboration that brings together digital infrastructure focused on climate science, HPC and AI aiming to provide, for the first time, easily accessible kilometer-scale climate information to manage the planet sustainably.\n\n\u201cThe reason why Earth-2 and EVE found each other at the perfect time is because the premise of Earth-2 is also based on these three miracles\u201d, Huang said.\n\nThe EVE initiative promises to accelerate the pace of advances, advocating coordinated climate projections at kilometer-scale resolution. It\u2019s an enormous challenge, but it builds on a huge base of advancements over the past 25 years.\n\nA sprawling suite of applications already benefits from accelerated computing\u2013 including ICON, IFS, NEMO, MPAS, WRF-G\u2013 and much more computing power for such applications is coming.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDcvMDMvY2xpbWF0ZS1yZXNlYXJjaC1uZXh0LXdhdmUv.pdf"}, {"question": "What does the NVIDIA Modulus framework do?", "gt_answer": "NVIDIA Modulus is an open-source framework for designing, training, and fine-tuning physics-informed machine learning models.", "gt_context": "The NVIDIA GH200 Grace Hopper Superchip is a breakthrough accelerated CPU designed from the ground up for giant-scale AI and high-performance computing applications. It delivers up to 10x higher\n\nperformance for applications running terabytes of data.\n\nIt\u2019s built to scale, and by connecting large numbers of these chips together, NVIDIA can offer systems with the power efficiency to accelerate climate research. \u201cTo the software, it looks like one giant processor,\u201d Huang said.\n\nTo help researchers put vast quantities of data to work quickly to unlock understanding, Huang spoke about NVIDIA Modulus , an open-source framework for designing, training and fine-tuning physics-informed machine learning models, and FourCastNet , an Earth system emulator and predictor that can learn physics from real-world data.\n\nUsing data alone, Huang showed how FourCastNet can learn the physical principles governing complex weather patterns and accurately predict the thermodynamic structure of storms.\n\nA novel application of AI emulators is to tether them to principled physics-based simulations to achieve two goals: rapid exploration and massive compression. FourCastNet can generate many possible weather trajectories when tethered to \u201ccheckpoints\u201d created by a climate simulation in seconds to minutes.\n\nThis enables rapid interactive exploration of massive ensembles of possible trajectories at high fidelity and provides massive data compression. The longer the distance between checkpoints, the larger the compression achieved.\n\nFourCastNet today can tether between checkpoints spaced a month apart, achieving 700x data compression. Huang demonstrated FourCastNet tethering for cities across the globe, including Berlin, Tokyo and Buenos Aires.\n\nHuang then demonstrated how FourCastNet-generated large ensembles anticipated an unprecedented North African heatwave from 2018. By running FourCastNet in Modulus, the Earth-2 team could generate a thousand different 21-day weather trajectories in one-tenth the time it previously took to do a single trajectory \u2014 and with 1,000x less energy consumption. Such massive ensembles are required to quantify the risk of rare, high-impact extreme weather events.\n\nLastly, NVIDIA technologies promise to help all this data and knowledge become more accessible, interactive, and useful with digital twins of increasingly complex systems \u2013 from Amazon warehouses to the way 5G signals propagate in dense urban environments.\n\nHuang then showed a stunning, high-resolution interactive visualization of global-scale climate data in the cloud, zooming in from a view of the globe to a street-level detailed view of Berlin. This approach can enable interactive exploration of climate information across the globe, Huang said.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDcvMDMvY2xpbWF0ZS1yZXNlYXJjaC1uZXh0LXdhdmUv.pdf"}, {"question": "What are the three miracles that Huang outlined?", "gt_answer": "Huang outlined three miracles: building more powerful computing systems for simulating physical systems at ultra-high resolution, training large-scale AI models, and interactive visualization of petabytes of data.", "gt_context": "To make the three miracles outlined above a reality, Huang showed how NVIDIA is building more powerful computing systems for simulating physical systems at ultra-high resolution, training large-scale AI models, and interactive visualization of petabytes of data.\n\n\u201cThese new types of supercomputers are just coming online,\u201d Huang said. \u201cThis is as fresh a computing technology as you can imagine.\u201d Huang ended his talk by thanking the world-leading scientists who congregated at the Berlin Summit and playfully suggesting a mission statement for EVE .\n\n\u201cEarth, the final frontier, these are the voyages of EVE,\u201d Huang said. \u201cIts mission is to push the limits of computing in service of climate modeling, to seek out new methods and technologies to study the global-to-local state of the climate to inform today the impact of mitigation and adaptation to Earth\u2019s tomorrow, to boldly go where no one has gone before.\u201d\n\nLearn more about Earth-2 .\n\nDiscover how AI is powering the future of clean energy .\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/07/03/climate-research-next-wave/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDcvMDMvY2xpbWF0ZS1yZXNlYXJjaC1uZXh0LXdhdmUv.pdf"}, {"question": "What are some examples of how generative AI can be applied in scientific research?", "gt_answer": "Generative AI can be used to understand the language of genomes and predict dangerous coronavirus variants for drug and vaccine research. It can also predict extreme weather events like hurricanes or heat waves.", "gt_context": "Unlocking the Language of Genomes and Climates: Anima Anandkumar on Using Generative AI to Tackle Global Challenges Top NVIDIA researcher speaks on generative AI providing opportunities to get ahead of the curve on challenges like drug development and extreme weather.\n\nAuthor: Kristen Yee\n\nGenerative AI-based models can not only learn and understand natural languages \u2014 they can learn the very language of nature itself, presenting new possibilities for scientific research.\n\nAnima Anandkumar, Bren Professor at Caltech and senior director of AI research at NVIDIA, was recently invited to speak at the President\u2019s Council of Advisors on Science and Technology .\n\nAt the talk, Anandkumar said that generative AI was described as \u201can inflection point in our lives,\u201d with discussions swirling around how to \u201charness it to benefit society and humanity through scientific applications.\u201d\n\nOn the latest episode of NVIDIA\u2019s AI Podcast , host Noah Kravitz spoke with Anandkumar on generative AI\u2019s potential to make splashes in the scientific community.\n\nIt can, for example, be fed DNA, RNA, viral and bacterial data to craft a model that understands the language of genomes. That model can help predict dangerous coronavirus variants to accelerate drug and vaccine research.\n\nGenerative AI can also predict extreme weather events like hurricanes or heat waves. Even with an AI boost, trying to predict natural events is challenging because of the sheer number of variables and unknowns.\n\n\u201cThose are the aspects we\u2019re working on at NVIDIA and Caltech, in collaboration with many other organizations, to say, \u2018How do we capture the multitude of scales present in the natural world?\u2019\u201d she said. \u201cWith the limited data we have, can we hope to extrapolate to finer scales? Can we hope to embed the right constraints and come up with physically valid predictions that make a big impact?\u201d\n\nAnandkumar adds that to ensure AI models are responsibly and safely used, existing laws must be strengthened to prevent dangerous downstream applications.\n\nShe also talks about the AI boom, which is transforming the role of humans across industries, and problems yet to be solved.\n\n\u201cThis is the research advice I give to everyone: the most important thing is the question, not the answer,\u201d she said.\n\nJules Anh Tuan Nguyen Explains How AI Lets Amputee Control Prosthetic Hand, Video Games A postdoctoral researcher at the University of Minnesota discusses his efforts to allow amputees to control their prosthetic limb \u2014 right down to the finger motions \u2014 with their minds.\n\nOverjet\u2019s Ai Wardah Inam on Bringing AI to Dentistry Overjet, a member of NVIDIA Inception , is moving fast to bring AI to dentists\u2019 offices. Dr. Wardah Inam, CEO of the company, discusses using AI to improve patient care.\n\nImmunai CTO and Co-Founder Luis Voloch on Using Deep Learning to Develop New Drugs Luis Voloch talks about tackling the challenges of the immune system with a machine learning and data science mindset.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDkvMTMvYW5pbWEtYW5hbmRrdW1hci1nZW5lcmF0aXZlLWFpLw==.pdf"}, {"question": "Where can I listen to the AI Podcast?", "gt_answer": "You can listen to the AI Podcast on Amazon Music, iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher, and TuneIn.", "gt_context": "The AI Podcast is now available through Amazon Music .\n\nIn addition, get the AI Podcast through iTunes , Google Podcasts , Google Play , Castbox , DoggCatcher, Overcast , PlayerFM , Pocket Casts, Podbay , PodBean , PodCruncher, PodKicker, Soundcloud , Spotify , Stitcher and TuneIn .\n\nMake the AI Podcast better. Have a few minutes to spare? Fill out this listener survey .\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/09/13/anima-anandkumar-generative-ai/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDkvMTMvYW5pbWEtYW5hbmRrdW1hci1nZW5lcmF0aXZlLWFpLw==.pdf"}, {"question": "What is the purpose of VMware Private AI Foundation with NVIDIA?", "gt_answer": "The purpose of VMware Private AI Foundation with NVIDIA is to enable enterprises to customize models and run generative AI applications, including intelligent chatbots, assistants, search, and summarization.", "gt_context": "VMware and NVIDIA Unlock Generative AI for Enterprises\n\nNew VMware Private AI Foundation With NVIDIA Enables Enterprises to Ready Their Businesses for Generative AI; Platform to Further Support Data Privacy, Security and Control\n\nVMware Explore\u2014VMware Inc. (NYSE: VMW) and NVIDIA (NASDAQ: NVDA) today announced the expansion of their strategic partnership to ready the hundreds of thousands of enterprises that run on VMware\u2019s cloud infrastructure for the era of generative AI.\n\nVMware Private AI Foundation with NVIDIA will enable enterprises to customize models and run generative AI applications, including intelligent chatbots, assistants, search and summarization. The platform will be a fully integrated solution featuring generative AI software and accelerated computing from NVIDIA, built on VMware Cloud Foundation and optimized for AI.\n\n\u201cGenerative AI and multi-cloud are the perfect match,\u201d said Raghu Raghuram, CEO, VMware. \u201cCustomer data is everywhere \u2014 in their data centers, at the edge, and in their clouds. Together with NVIDIA, we\u2019ll empower enterprises to run their generative AI workloads adjacent to their data with confidence while addressing their corporate data privacy, security and control concerns.\u201d\n\n\u201cEnterprises everywhere are racing to integrate generative AI into their businesses,\u201d said Jensen Huang, founder and CEO, NVIDIA. \u201cOur expanded collaboration with VMware will offer hundreds of thousands of customers \u2014 across financial services, healthcare, manufacturing and more \u2014 the full-stack software and computing they need to unlock the potential of generative AI using custom applications built with their own data.\u201d\n\nFull-Stack Computing to Supercharge Generative AI To achieve business benefits faster, enterprises are seeking to streamline development, testing and deployment of generative AI applications. McKinsey estimates that generative AI could add up to $4.4 trillion annually to the global economy.(1)\n\nVMware Private AI Foundation with NVIDIA will enable enterprises to harness this capability, customizing large language models; producing more secure and private models for their internal usage; and offering generative AI as a service to their users; and, more securely running inference workloads at scale.\n\nThe platform is expected to include integrated AI tools to empower enterprises to run proven models trained on their private data in a cost-efficient manner. To be built on VMware Cloud Foundation and NVIDIA AI Enterprise software, the platform\u2019s expected benefits will include:", "document": "Vk13YXJlIDgvMjIvMjMucGRm.pdf"}, {"question": "What framework does NeMo use for deploying generative AI in production?", "gt_answer": "NeMo uses TensorRT for Large Language Models (TRT-LLM).", "gt_context": "Privacy \u2014 Will enable customers to easily run AI services adjacent to wherever they have data with an architecture that preserves data privacy and enable secure access. Choice \u2014 Enterprises will have a wide choice in where to build and run their models \u2014 from NVIDIA NeMo\u2122 to Llama 2 and beyond \u2014 including leading OEM hardware configurations and, in the future, on public cloud and service provider offerings. Performance \u2014 Running on NVIDIA accelerated infrastructure will deliver performance equal to and even exceeding bare metal in some use cases, as proven in recent industry benchmarks. Data-Center Scale \u2014 GPU scaling optimizations in virtualized environments will enable AI workloads to scale across up to 16 vGPUs/GPUs in a single virtual machine and across multiple nodes to speed generative AI model fine-tuning and deployment. Lower Cost \u2014 Will maximize usage of all compute resources across, GPUs, DPUs and CPUs to lower overall costs, and create a pooled resource environment that can be shared efficiently across teams. Accelerated Storage \u2014 VMware vSAN Express Storage Architecture will provide performance-optimized NVMe storage and supports GPUDirect\u00ae storage over RDMA, allowing for direct I/O transfer from storage to GPUs without CPU involvement. Accelerated Networking \u2014 Deep integration between vSphere and NVIDIA NVSwitch\u2122 technology will further enable multi-GPU models to execute without inter-GPU bottlenecks. Rapid Deployment and Time to Value \u2014 vSphere Deep Learning VM images and image repository will enable fast prototyping capabilities by offering a stable turnkey solution image that includes frameworks and performance- optimized libraries pre-installed.\n\nThe platform will feature NVIDIA NeMo, an end-to-end, cloud-native framework included in NVIDIA AI Enterprise \u2014 the operating system of the NVIDIA AI platform \u2014 that allows enterprises to build, customize and deploy generative AI models virtually anywhere. NeMo combines customization frameworks, guardrail toolkits, data curation tools and pretrained models to offer enterprises an easy, cost-effective and fast way to adopt generative AI.\n\nFor deploying generative AI in production, NeMo uses TensorRT for Large Language Models (TRT-LLM), which accelerates\n\nand optimizes inference performance on the latest LLMs on NVIDIA GPUs. With NeMo, VMware Private AI Foundation with NVIDIA will enable enterprises to pull in their own data to build and run custom generative AI models on VMware\u2019s hybrid cloud infrastructure.\n\nAt VMware Explore 2023, NVIDIA and VMware will highlight how developers within enterprises can use the new NVIDIA AI Workbench to pull community models, like Llama 2, available on Hugging Face, customize them remotely and deploy production-grade generative AI in VMware environments.", "document": "Vk13YXJlIDgvMjIvMjMucGRm.pdf"}, {"question": "What companies will be supporting VMware Private AI Foundation with NVIDIA?", "gt_answer": "Dell Technologies, Hewlett Packard Enterprise, and Lenovo will be among the companies supporting VMware Private AI Foundation with NVIDIA.", "gt_context": "Broad Ecosystem Support for VMware Private AI Foundation With NVIDIA VMware Private AI Foundation with NVIDIA will be supported by Dell Technologies, Hewlett Packard Enterprise and Lenovo \u2014 which will be among the first to offer systems that supercharge enterprise LLM customization and inference workloads with NVIDIA L40S GPUs, NVIDIA BlueField\u00ae-3 DPUs and NVIDIA ConnectX\u00ae-7 SmartNICs.\n\nThe NVIDIA L40S GPU enables up to 1.2x more generative AI inference performance and up to 1.7x more training performance compared with the NVIDIA A100 Tensor Core GPU.\n\nNVIDIA BlueField-3 DPUs accelerate, offload and isolate the tremendous compute load of virtualization, networking, storage, security and other cloud-native AI services from the GPU or CPU.\n\nNVIDIA ConnectX-7 SmartNICs deliver smart, accelerated networking for data center infrastructure to boost some of the world\u2019s most demanding AI workloads.\n\nVMware Private AI Foundation with NVIDIA builds on the companies\u2019 decade-long partnership. Their co-engineering work optimized VMware\u2019s cloud infrastructure to run NVIDIA AI Enterprise with performance comparable to bare metal. Mutual customers further benefit from the resource and infrastructure management and flexibility enabled by VMware Cloud Foundation.\n\nAvailability VMware intends to release VMware Private AI Foundation with NVIDIA in early 2024.\n\nAbout NVIDIA Since its founding in 1993, NVIDIA (NASDAQ: NVDA) has been a pioneer in accelerated computing. The company\u2019s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, ignited the era of modern AI and is fueling industrial digitalization across markets. NVIDIA is now a full-stack computing company with data- center-scale offerings that are reshaping industry. More information at https://nvidianews.nvidia.com/.\n\nAbout VMware VMware is a leading provider of multi-cloud services for all apps, enabling digital innovation with enterprise control. As a trusted foundation to accelerate innovation, VMware software gives businesses the flexibility and choice they need to build the future. Headquartered in Palo Alto, California, VMware is committed to building a better future through the company\u2019s 2030 Agenda. For more information, please visit www.vmware.com/company.\n\n1. \u201cThe economic potential of generative AI: The next productivity frontier,\u201d McKinsey, 2023", "document": "Vk13YXJlIDgvMjIvMjMucGRm.pdf"}, {"question": "What are some of the products mentioned in the press release?", "gt_answer": "Some of the products mentioned in the press release are NVIDIA AI Enterprise, NVIDIA NeMo, TensorRT, NVIDIA L40S GPUs, NVIDIA BlueField-3 DPUs, and NVIDIA ConnectX-7 SmartNICs.", "gt_context": "1. \u201cThe economic potential of generative AI: The next productivity frontier,\u201d McKinsey, 2023\n\nCertain statements in this press release including, but not limited to, statements as to: the benefits, impact, performance, features and availability of our products and technologies, including NVIDIA AI Enterprise, NVIDIA NeMo, TensorRT, NVIDIA L40S GPUs, NVIDIA BlueField-3 DPUs, and NVIDIA ConnectX-7 SmartNICs; NVIDIA\u2019s partnership with VMware, including the benefits, impact, features, and availability of the VMware Private AI Foundation with NVIDIA platform; enterprises everywhere racing to integrate generative AI into their businesses; NVIDIA\u2019s expanded collaboration with VMware offering hundreds of thousands of customers \u2014 across financial services, healthcare, manufacturing and more \u2014 the full-stack software and computing they need to unlock the potential of generative AI using custom applications built with their own data; estimates that generative AI could add up to $4.4 trillion annually to the global economy; and broad ecosystem support for VMware Private AI Foundation with NVIDIA and third parties supporting the platform are forward- looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners' products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company's website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward- looking statements to reflect future events or circumstances.\n\nMany of the products and features described herein remain in various stages and will be offered on a when-and-if-available", "document": "Vk13YXJlIDgvMjIvMjMucGRm.pdf"}, {"question": "What is the purpose of the statements above?", "gt_answer": "The purpose of the statements above is to clarify that they are not a commitment, promise, or legal obligation, and that the development, release, and timing of any features or functionalities described for NVIDIA's products can change at their discretion.", "gt_context": "basis. The statements above are not intended to be, and should not be interpreted as a commitment, promise, or legal obligation, and the development, release, and timing of any features or functionalities described for our products is subject to change and remains at the sole discretion of NVIDIA. NVIDIA will have no liability for failure to deliver or delay in the delivery of any of the products, features or functions set forth herein.\n\n\u00a9 2023 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, BlueField, ConnectX and NeMo are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. VMware and Explore are registered trademarks or trademarks of VMware, Inc. or its subsidiaries in the United States and other jurisdictions. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice. This press release may contain hyperlinks to non-VMware websites that are created and maintained by third parties who are solely responsible for the content on such websites.\n\nShannon McPhee NVIDIA Corporation +1-310-920-9642 smcphee@nvidia.com Eloy Ontiveros VMware +1-650-427-6145 eontiveros@vmware.com", "document": "Vk13YXJlIDgvMjIvMjMucGRm.pdf"}, {"question": "What is the concept behind the 'Black Math' video by Michael Wartella?", "gt_answer": "The concept behind the 'Black Math' video is to give a fresh look to the 20-year-old song while maintaining the style of classic White Stripes videos.", "gt_context": "Rock \u2018n\u2019 Robotics: The White Stripes\u2019 AI-Assisted Visual Symphony\n\nAuthor: Brian Caulfield\n\nPlayfully blending art and technology, underground animator Michael Wartella has teamed up with artificial intelligence to breathe new life into The White Stripes\u2019 fan-favorite song, \u201cBlack Math.\u201d\n\nThe video was released earlier this month to celebrate the 20th anniversary of the groundbreaking \u201cElephant\u201d album.\n\nWartella is known for his genre-bending work as a cartoonist and animator.\n\nHis Brooklyn-based Dream Factory Animation studio produced the \u201cBlack Math\u201d video, which combines digital and practical animation techniques with AI-generated imagery.\n\n\u201cThis track is 20 years old, so we wanted to give it a fresh look, but we wanted it to look like it was cut from the same cloth as classic White Stripes videos,\u201d Wartella said.\n\nFor the \u201cBlack Math\u201d video, Wartella turned to Automatic1111, an open-source generative AI tool. To create the video, Wartella and his team started off with the actual album cover, using AI to \u201cbore\u201d into the image.\n\nThey then used AI to train the AI and build more images in a similar style. \u201cThat was really crazy and interesting and everything built from there,\u201d Wartella said.\n\nThis image-to-image deep learning model caused a sensation on its release last year, and is part of a new generation of AI tools that are transforming the arts.\n\n\u201cWe used several different AI tools and animation tools,\u201d Wartella said. \u201cFor every shot, I wanted this to look like an AI video in a way those classic CGI videos look very CGI now.\u201d\n\nWartella and his team relied heavily on archived images and video of the musician duo as well as motion-capture techniques to create a video replicating the feel of late-1990s and early-2000s music videos.\n\nWartella has long relied on NVIDIA GPUs to run a full complement of digital animation tools on workstations from Austin, Texas-based BOXX Technologies.\n\n\u201cWe\u2019ve used BOXX workstations with NVIDIA cards for almost 20 years now,\u201d he said. \u201cThat combination is just really powerful \u2014 it\u2019s fast, it\u2019s stable.\u201d\n\nWartella describes his work on the \u201cBlack Math\u201d video as a \u201ccollaboration\u201d with the AI tool, using it to generate images, tweaking the results and then returning to the technology for more.\n\n\u201cI see this as a collaboration, not just pressing a button. It\u2019s an incredibly creative tool,\u201d Wartella said of generative AI .\n\nThe results were sometimes \u201ckind of strange,\u201d a quality that Wartella prizes.\n\nHe took the output from the AI, ran it through conventional composition and editing tools, and then processed the results through AI again.\n\nWartella felt that working with AI in this way made the video stronger and more abstract.\n\nThe video presents Jack and Meg White in their 2003 personas, emerging from a whimsical, dark cyber fantasy.\n\nThe video parallels the look and feel of the band\u2019s videos from the early 2000s, even as it leans into the otherworldly, almost kaleidoscopic qualities of modern generative AI.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDQvMjgvd2hpdGUtc3RyaXBlcy8=.pdf"}, {"question": "What is the significance of AI in the creative industry?", "gt_answer": "AI is increasingly being used by creatives, including artists like Wartella, to explore its potential in their work. Tools like Midjourney, OpenAI\u2019s Dall\u00b7E, DreamStudio, and Stable Diffusion have facilitated the creation of AI-generated art.", "gt_context": "\u201cThe lyrics are anti-authoritarian and punkish, so the sound steered this one in the direction,\u201d Wartella said. \u201cThe song itself has a scientific theme that is already a perfect fit for the AI.\u201d\n\nWhen \u201cBlack Math\u201d was first released as part of The White Stripes\u2019 critically acclaimed \u201cElephant\u201d album, it grabbed attention for its high-energy, powerful guitar riffs and Jack White\u2019s unmistakable vocals.\n\nThe song played a role in cementing the band\u2019s reputation as a critical player in the garage rock revival of the early 2000s.\n\nWartella\u2019s inventive approach with \u201cBlack Math\u201d highlights the growing use of AI \u2014 as well as lively discussion of its implications \u2014 among creatives.\n\nOver the past few months, AI-generated art has been increasingly prevalent across various social media platforms, thanks to tools like Midjourney, OpenAI\u2019s Dall\u00b7E, DreamStudio and Stable Diffusion.\n\nAs AI advances, Wartella said, we can expect to see more artists exploring the potential of these tools in their work.\n\n\u201cI\u2019m in full favor of people having the opportunity to play around with the technology,\u201d Wartella said. \u201cWe\u2019ll definitely use AI again if the song or the project calls for it.\u201d\n\nThe release of the \u201cBlack Math\u201d music video coincides with the launch of \u201cThe White Stripes Elephant (20th Anniversary)\u201d deluxe vinyl reissue package, available now through Jack White\u2019s Third Man Records and Sony Legacy Recordings.\n\nWatch the \u201cBlack Math\u201d music video:\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/04/28/white-stripes/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDQvMjgvd2hpdGUtc3RyaXBlcy8=.pdf"}, {"question": "What are the benefits of the GeForce RTX 4060 family?", "gt_answer": "The GeForce RTX 4060 family brings massive creator benefits, including hardware acceleration for 3D, video, and AI workflows, optimizations for popular creative apps, and exclusive Studio apps.", "gt_context": "Beyond Fast: GeForce RTX 4060 GPU Family Gives Creators More Options to Accelerate Workflows, Starting at $299\n\nPlus, D5 Render software adds NVIDIA DLSS 3, \u2018Into the Omniverse\u2019 launches and NVIDIA artist Daniel Barnes shares his wormhole animation this week \u2018In the NVIDIA Studio.\u2019\n\nAuthor: Gerardo Delgado\n\nEditor\u2019s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We\u2019re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.\n\nThe GeForce RTX 4060 family will be available starting next week, bringing massive creator benefits to the popular 60-class GPUs.\n\nThe latest GPUs in the 40 Series come backed by NVIDIA Studio technologies, including hardware acceleration for 3D, video and AI workflows; optimizations for RTX hardware in over 110 of the most popular creative apps; and exclusive Studio apps like Omniverse , Broadcast and Canvas .\n\nReal-time ray-tracing renderer D5 Render introduced support for NVIDIA DLSS 3 technology, enabling super smooth real-time rendering experiences, so creators can work with larger scenes without sacrificing speed or interactivity.\n\nPlus, the new Into the Omniverse series highlights the latest advancements to NVIDIA Omniverse , a platform furthering the evolution of the metaverse with the OpenUSD framework. The series showcases how artists, developers and enterprises can use the open development platform to transform their 3D workflows. The first installment highlights an update coming soon to the Adobe Substance 3D Painter Connector.\n\nIn addition, NVIDIA 3D artist Daniel Barnes returns this week In the NVIDIA Studio to share his mesmerizing, whimsical animation, Wormhole 00527 .\n\nThe GeForce RTX 4060 family is powered by the ultra-efficient NVIDIA Ada Lovelace architecture with fourth-generation Tensor Cores for AI content creation, third-generation RT Cores and compatibility with DLSS 3 for ultra-fast 3D rendering, as well as the eighth-generation NVIDIA encoder (NVENC), now with support for AV1.\n\n3D modelers can build and edit realistic 3D models in real time, up to 45% faster than the previous generation, thanks to third-generation RT Cores, DLSS 3 and the NVIDIA Omniverse platform.\n\nVideo editors specializing in Adobe Premiere Pro, Blackmagic Design\u2019s DaVinci Resolve and more have at their disposal a variety of AI-powered effects, such as auto-reframe, magic mask and depth estimation. Fourth-generation Tensor Cores seamlessly hyper-accelerate these effects, so creators can stay in their flow states.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDUvMTgvZ2Vmb3JjZS1ydHgtNDA2MC10aS8=.pdf"}, {"question": "What is the purpose of DLSS 3?", "gt_answer": "DLSS 3 multiplies frame rates in popular 3D apps to provide a vastly improved real-time experience for architects, designers, interior designers, and 3D artists.", "gt_context": "Broadcasters can jump into next-generation livestreaming with the eighth-generation NVENC with support for AV1. The new encoder is 40% more efficient, making livestreams appear as if there were a 40% increase in bitrate \u2014 a big boost in image quality that enables 4K streaming on apps like OBS Studio and platforms such as YouTube and Discord.\n\nNVENC boasts the most efficient hardware encoding available, providing significantly better quality than other GPUs. At the same bitrate, images will look better, sharper and have less artifacts, like in the\n\nexample above.\n\nCreators are embracing AI en masse. DLSS 3 multiplies frame rates in popular 3D apps. ON1 ResizeAI, software that enables high-quality photo enlargement, is sped up 24% compared with last-generation hardware. DaVinci Resolve\u2019s AI Magic Mask feature saves video editors considerable time automating the highly manual process of rotoscoping, carried out 20% faster than the previous generation.\n\nThe GeForce RTX 4060 Ti (8GB) will be available starting Wednesday, May 24, at $399. The GeForce RTX 4060 Ti (16GB) will be available in July, starting at $499. GeForce RTX 4060 will also be available in July, starting at $299.\n\nVisit the Studio Shop for GeForce RTX 4060-powered NVIDIA Studio systems when available, and explore the range of high-performance Studio products.\n\nD5 Render adds support for NVIDIA DLSS 3, bringing a vastly improved real-time experience to architects, designers, interior designers and 3D artists.\n\nSuch professionals want to navigate scenes smoothly while editing, and demonstrate their creations to clients in the highest quality. Scenes can be incredibly detailed and complex, making it difficult to maintain high real-time viewport frame rates and present in original quality.\n\nD5 is coveted by many artists for its global illumination technology, called D5 GI, which delivers high-quality lighting and shading effects in real time, without sacrificing workflow efficiency.\n\nBy integrating DLSS 3, which combines AI-powered DLSS Frame Generation and Super Resolution technologies, real-time viewport frame rates increase up to 3x, making creator experiences buttery smooth. This allows designers to deal with larger scenes, higher-quality models and textures \u2014 all in real time \u2014 while maintaining a smooth, interactive viewport.\n\nLearn more about the update .\n\nNVIDIA Omniverse is a key component of the NVIDIA Studio platform and the future of collaborative 3D content creation.\n\nA new monthly blog series, Into the Omniverse , showcases how artists, developers and enterprises can transform their creative workflows using the latest Omniverse advancements.\n\nThis month, 3D creators across industries are set to benefit from the pairing of Omniverse and the Adobe Substance 3D suite of creative tools.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDUvMTgvZ2Vmb3JjZS1ydHgtNDA2MC10aS8=.pdf"}, {"question": "What software did Daniel Barnes use for further refining his model?", "gt_answer": "Daniel Barnes imported a primitive model into Autodesk Maya for further refinement.", "gt_context": "An upcoming update to the Omniverse Connector for Adobe Substance 3D Painter will dramatically increase flexibility for users, with new capabilities including an export feature using Universal Scene Description (OpenUSD) , an open, extensible file framework enabling non-destructive workflows and collaboration in scene creation.\n\nFind details in the blog and check in every month for more Omniverse news .\n\nNVIDIA 3D artist Daniel Barnes has a simple initial approach to his work: sketch until something seems cool enough to act on. While his piece Wormhole 00527 was no exception to this usual process, an emotional component made a significant impact on it.\n\n\u201cAfter the pandemic and various global events, I took even more interest in spaceships and escape pods,\u201d said Barnes. \u201cIt was just an abstract form of escapism that really played on the idea of \u2018get me out of here,\u2019 which I think we all experienced at one point, being inside so much.\u201d\n\nBarnes imagined Wormhole 00527 to comprise each blur one might pass by as an alternate star system \u2014 a place on the other side of the galaxy where things are really similar but more peaceful, he said. \u201cAn alternate Earth of sorts,\u201d the artist added.\n\nSculpting on his tablet one night in the Nomad app, Barnes imported a primitive model into Autodesk Maya for further refinement. He retopologized the scene, converting high-resolution models into much smaller files that can be used for animation.\n\n\u201cI\u2019ve been creating in 3D for over a decade now, and GeForce RTX graphics cards have been able to power multiple displays smoothly and run my 3D software viewports at great speeds. Plus, rendering in real time on some projects is great for fast development.\u201d \u2014 Daniel Barnes\n\nBarnes then took a screenshot, further sketched out his modeling edits and made lighting decisions in Adobe Photoshop.\n\nHis GeForce RTX 4090 GPU gives him access to over 30 GPU-accelerated features for quickly, smoothly modifying and adjusting images. These features include blur gallery, object selection and perspective warp.\n\nBack in Autodesk Maya, Barnes used the quad-draw tool \u2014 a streamlined, one-tool workflow for retopologizing meshes \u2014 to create geometry, adding break-in panels that would be advantageous for animating.\n\nBarnes used Chaos V-Ray with Autodesk Maya\u2019s Z-depth feature, which provides information about each object\u2019s distance from the camera in its current view. Each pixel representing the object is evaluated for distance individually \u2014 meaning different pixels for the same object can have varying grayscale values. This made it far easier for Barnes to tweak depth of field and add motion-blur effects.\n\nHe also added a combination of lights and applied materials with ease. Deploying RTX-accelerated ray tracing and AI denoising with the default Autodesk Arnold renderer enabled smooth movement in the viewport, resulting in beautifully photorealistic renders.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDUvMTgvZ2Vmb3JjZS1ydHgtNDA2MC10aS8=.pdf"}, {"question": "What technology did he use for faster rendering?", "gt_answer": "He used NVIDIA CUDA technology for faster rendering.", "gt_context": "He finished the project by compositing in Adobe After Effects, using GPU-accelerated features for faster rendering with NVIDIA CUDA technology.\n\nWhen asked what his favorite creative tools are, Barnes didn\u2019t hesitate. \u201cDefinitely my RTX cards and nice large displays!\u201d he said.\n\nCheck out Barnes\u2019 portfolio on Instagram .\n\nFollow NVIDIA Studio on Instagram , Twitter and Facebook . Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter .\n\nGet started with NVIDIA Omniverse by downloading the standard license free , or learn how Omniverse Enterprise can connect your team . Developers can get started with Omniverse resources. Stay up to date on the platform by subscribing to the newsletter , and follow NVIDIA Omniverse on Instagram , Medium and Twitter .\n\nFor more, join the Omniverse community and check out the Omniverse forums , Discord server , Twitch and YouTube channels.\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/05/18/geforce-rtx-4060-ti/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDUvMTgvZ2Vmb3JjZS1ydHgtNDA2MC10aS8=.pdf"}, {"question": "Which companies are early supporters of NVIDIA Omniverse Cloud?", "gt_answer": "Early supporters of Omniverse Cloud include RIMAC Group, WPP, and Siemens.", "gt_context": "NVIDIA Launches Omniverse Cloud Services for Building and Operating Industrial Metaverse Applications\n\nCompany\u2019s First SaaS Offering Includes Omniverse Cloud Nucleus, DRIVE Sim, Isaac Sim, Replicator for Synthetic Data Generation; Initial Customers Include RIMAC Group, Siemens, WPP\n\nGTC -- NVIDIA today announced its first software- and infrastructure-as-a-service offering \u2014 NVIDIA Omniverse\u2122 Cloud \u2014 a comprehensive suite of cloud services for artists, developers and enterprise teams to design, publish, operate and experience metaverse applications anywhere.\n\nUsing Omniverse Cloud, individuals and teams can experience in one click the ability to design and collaborate on 3D workflows without the need for any local compute power. Roboticists can train, simulate, test and deploy AI-enabled intelligent machines with increased scalability and accessibility. Autonomous vehicle engineers can generate physically based sensor data and simulate traffic scenarios to test a variety of road and weather conditions for safe self-driving deployment.\n\nEarly supporters of Omniverse Cloud include RIMAC Group, WPP and Siemens.\n\n\u201cThe metaverse, the 3D internet, connects virtual 3D worlds described in USD and viewed through a simulation engine,\u201d said Jensen Huang, founder and CEO of NVIDIA. \u201cWith Omniverse in the cloud, we can connect teams worldwide to design, build, and operate virtual worlds and digital twins.\u201d\n\nGlobal Leaders Support Omniverse Cloud WPP, the world\u2019s largest marketing services organization, is the first to launch automotive marketing services on Omniverse Cloud to deliver custom, advanced 3D content and experiences to leading automotive brands.\n\n\u201cThe industry expectation for what great automotive content looks like, in any channel, has increased dramatically in the past few years,\u201d said Stephan Pretorius, chief technology officer at WPP. \u201cWith Omniverse Cloud, we are changing the way we build, share and consume automotive content \u2013 bringing sustainable, low-emission production to our customers.\u201d\n\nBuilding on the partnership announced earlier this year, Siemens, a leader in industrial automation hardware and software, is working closely with NVIDIA to leverage Omniverse Cloud and NVIDIA OVX\u2122 infrastructure together to deliver solutions from the Siemens Xcelerator business platform.\n\n\u201cAn open ecosystem is a central design principle for the Siemens Xcelerator digital business platform,\u201d said Tony Hemmelgarn, president and CEO of Siemens Digital Industries Software. \u201cWe are excited to expand our partnership with NVIDIA, develop integrations between Siemens Xcelerator and Omniverse Cloud, and enable an industrial metaverse where companies can remotely connect their organizations and operate in real time across the complete product and production lifecycle.\u201d\n\nRIMAC, a pioneer in electric vehicle technologies, is using Omniverse Cloud to provide an end-to-end automotive pipeline \u2014 from design to marketing.", "document": "T21uaXZlcnNlIENsb3VkIFNlcnZpY2VzIDkvMjAvMjIucGRm.pdf"}, {"question": "What are the key benefits of using electric motors in car design?", "gt_answer": "Electric motors are efficient, flexible, and can adjust quickly, allowing engineers to create cars that handle in ways combustion engine cars cannot.", "gt_context": "\u201cElectric motors are efficient and can adjust in an instant. Their flexibility allows engineers to create a car that can handle in a way a combustion engine car never could,\u201d said Mate Rimac, founder and CEO of RIMAC. \u201cOmniverse Cloud will provide similar efficiency and flexibility, enabling our engineering teams to focus on the design of the car model itself, and spend less time on the intricacies of complex 3D design pipelines. And with this 3D car configurator experience, it unlocks endless possibilities for customization without having to manually render each layer, which saves time and money.\u201d\n\nIn the GTC keynote, Huang showcased an Omniverse Cloud demo featuring an advanced, real-time 3D car configurator of the RIMAC Nevera, the recently launched electric hypercar from BUGATTI RIMAC, part of the RIMAC Group.\n\nOmniverse Cloud services run on the Omniverse Cloud Computer, a computing system comprised of NVIDIA OVX\u2122 for graphics and physics simulation, NVIDIA HGX\u2122 for advanced AI workloads and the NVIDIA Graphics Delivery Network (GDN), a global-scale distributed data center network for delivering high-performance, low-latency metaverse graphics at the edge.\n\nOmniverse Cloud services include:\n\nOmniverse Nucleus Cloud \u2014 provides 3D designers and teams the freedom to collaborate and access a shared Universal Scene Description (USD)-based 3D scene and data. Nucleus Cloud enables any designer, creator or developer to save changes, share, make live edits and view changes in a scene from nearly anywhere.\n\nOmniverse App Streaming \u2014 enables users without NVIDIA RTX\u2122 GPUs to stream Omniverse reference applications like Omniverse Create, an app for designers and creators to build USD-based virtual worlds, Omniverse View, an app for reviews and approvals, and NVIDIA Isaac Sim, for training and testing robots. Omniverse Replicator \u2014 enables researchers, developers and enterprises to generate physically accurate 3D synthetic data, and easily build custom synthetic-data generation tools to accelerate the training and accuracy of perception networks and easily integrate with NVIDIA AI cloud services. Omniverse Farm \u2014 enables users and enterprises to harness multiple cloud compute instances to scale out Omniverse tasks such as rendering and synthetic data generation. NVIDIA Isaac Sim \u2014 a scalable robotics simulation application and synthetic-data generation tool that powers photorealistic, physically accurate virtual environments to develop, test and manage AI-based robots. NVIDIA DRIVE Sim\u2122 \u2014 an end-to-end simulation platform to run large-scale, physically accurate multisensor simulations to support autonomous vehicle development and validation from concept to deployment, improving developer productivity and accelerating time to market.", "document": "T21uaXZlcnNlIENsb3VkIFNlcnZpY2VzIDkvMjAvMjIucGRm.pdf"}, {"question": "What containers are available on NVIDIA NGC for self-service deployment on AWS using Amazon EC2 G5 instances?", "gt_answer": "Omniverse Farm, Replicator, and Isaac Sim containers are available on NVIDIA NGC for self-service deployment on AWS using Amazon EC2 G5 instances.", "gt_context": "Availability Omniverse Farm, Replicator and Isaac Sim containers are available today on NVIDIA NGC\u2122 for self-service deployment on AWS using Amazon EC2 G5 instances featuring NVIDIA A10G Tensor Core GPUs. In addition, Omniverse Cloud will be available as NVIDIA managed services via early access by application.\n\nTo learn more about NVIDIA Omniverse Cloud, watch Huang\u2019s GTC keynote. Register free for GTC to attend sessions with NVIDIA and industry leaders.\n\nAbout NVIDIA Since its founding in 1993, NVIDIA (NASDAQ: NVDA) has been a pioneer in accelerated computing. The company\u2019s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics and ignited the era of modern AI. NVIDIA is now a full-stack computing company with data-center-scale offerings that are reshaping industry. More information at https://nvidianews.nvidia.com/.\n\nCertain statements in this press release including, but not limited to, statements as to: the benefits, impact, performance, and features of NVIDIA Omniverse Cloud services, the Omniverse Cloud Computer, NVIDIA OVX, NVIDIA HGX and the NVIDIA Graphics Delivery Network; the company connecting teams worldwide to design, build, and operate virtual worlds and digital twins with Omniverse in the cloud; and the availability of Omniverse Cloud services are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners' products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company's website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward- looking statements to reflect future events or circumstances.", "document": "T21uaXZlcnNlIENsb3VkIFNlcnZpY2VzIDkvMjAvMjIucGRm.pdf"}, {"question": "What is the performance and power efficiency of the new GeForce RTX 40 Series Laptop GPUs?", "gt_answer": "The performance and power efficiency of the new GeForce RTX 40 Series Laptop GPUs enable the greatest ever generational leap.", "gt_context": "NVIDIA Reveals Gaming, Creator, Robotics, Auto Innovations at CES New GeForce RTX GPUs, hyper-efficient laptops, new Omniverse capabilities among highlights of CES special address.\n\nAuthor: Brian Caulfield\n\nPowerful new GeForce RTX GPUs, a new generation of hyper-efficient laptops and new Omniverse capabilities and partnerships across the automotive industry were highlights of a news-packed address ahead of this week\u2019s CES trade show in Las Vegas.\n\n\u201cAI will define the future of computing and this has influenced much of what we\u2019re covering today,\u201d said Jeff Fisher, senior vice president for gaming products at NVIDIA, as he kicked off the presentation.\n\nFisher was joined by several leaders from NVIDIA to introduce products and partnerships across gaming and content creation, robotics and next-generation automobiles.\n\nThe headline news:\n\nGeForce RTX 40 Series laptops deliver company\u2019s largest-ever generational leap in performance and power efficiency to 170+ laptops for gamers and designers.\n\nGeForce RTX 40 Series Studio laptops will bring new power and efficiency to creators, gamers and designers .\n\nNVIDIA launches GeForce RTX 4070 Ti graphics cards, faster than RTX 3090 Ti, bringing power and efficiency of NVIDIA Ada architecture to $799.\n\nDLSS 3 comes to 50 released and upcoming games .\n\nNVIDIA bringing RTX 4080 performance to GeForce NOW cloud-gaming service .\n\nHyundai Motor Group, BYD and Polestar adopt GeForce NOW for cars .\n\nFoxconn partners with NVIDIA to build automated electric vehicles and manufacture NVIDIA DRIVE Orin computers for the global automotive market .\n\nMercedes-Benz to use NVIDIA Omniverse to assemble next-generation factories in \u201cdigital-first\u201d approach.\n\nMajor updates to NVIDIA Omniverse Enterprise enhance performance and offer new deployment options.\n\nNVIDIA opens Omniverse portals with generative AIs for 3D and RTX Remix; releases AI avatar builder Omniverse ACE in early access .\n\nNVIDIA Isaac Sim gets next-gen simulation tools for robotics development .\n\nFisher said the performance and power efficiency of the NVIDIA GeForce RTX 40 Series Laptop GPUs enable the greatest ever generational leap, including 14-inch gaming and creating powerhouse laptops, starting at $999 in February.\n\nNew GeForce RTX 4070 Ti graphics cards for desktops are faster than last generation\u2019s RTX 3090 Ti at nearly half the power, bringing the NVIDIA Ada Lovelace architecture down to $799, with availability starting Jan. 5.\n\nAnd DLSS 3 is being adopted by developers faster than any prior NVIDIA tech, with 50 released and upcoming titles, including Witchfire , The Day Before , Warhaven , THRONE AND LIBERTY and Atomic Heart .\n\nIn addition, RTX 4080 performance is coming to the NVIDIA GeForce NOW cloud-gaming service. As a result, Fisher said millions more gamers will have access to the NVIDIA Ada architecture with GeForce NOW\u2019s Ultimate membership.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDEvMDMvZ2FtaW5nLWNyZWF0b3Itcm9ib3RpY3MtYXV0by1jZXMv.pdf"}, {"question": "What updates were announced for the next release of Isaac Sim?", "gt_answer": "The updates for the next release of Isaac Sim include improved sensor and lidar support, a new conveyor-building tool, a utility to add people to the simulation environment, new sim-ready warehouse assets, and pre-integrated popular robots.", "gt_context": "The new tier will bring NVIDIA Reflex and 240 frames per second streaming to the cloud for the first time, along with full ray tracing and DLSS 3 in games like Portal With RTX .\n\nMomentum for NVIDIA RTX continues to build, Fisher said. \u201cCreating has grown beyond photos and videos to virtual worlds rendered with 3D cinematic graphics and true-to-life physics,\u201d Fisher said. \u201cThe RTX platform is powering this growth.\u201d\n\nRay tracing and AI are defining the next generation of content, and NVIDIA Studio is the platform for this new breed of content creators. The heartbeat of Studio is found in NVIDIA Omniverse , where creators can connect accelerated apps and collaborate in real time.\n\nBuilt with NVIDIA RTX, Omniverse is a platform enabling 3D artists to connect their favorite tools from Adobe, Autodesk, SideFX, Unreal Engine and more. And Omniverse now has a new Connector for Unity, said Stephanie Johnson, vice president of consumer marketing at NVIDIA.\n\nJohnson introduced a suite of new generative AI tools and experimental plug-ins using the power of AI as the ultimate creative assistant. Audio2Face and Audio2Gesture generate animations from an audio file. The AI ToyBox by NVIDIA Research lets users generate 3D meshes from 2D inputs.\n\nCompanies have used generative AI technology to build Omniverse Connectors and extensions. Move.AI \u2019s Omniverse extension, for example, enables video-to-animation. Lumirithmic generates 3D mesh for heads from facial scans. And Elevate3D generates photorealistic 3D visualizations of products from 360-degree video recordings.\n\nJohnson also announced that NVIDIA RTX Remix , which is built on Omniverse and is \u201cthe easiest way to mod classic games,\u201d will be entering early access soon. \u201cThe modding community can\u2019t wait to get their hands on Remix,\u201d she said.\n\nSimulation plays a vital role in the lifecycle of a robotics project, explained Deepu Talla, vice president of embedded and edge computing at NVIDIA. Partners are using NVIDIA Isaac Sim to create digital twins that help speed the training and deployment of intelligent robots.\n\nTo revolutionize the way the robotics ecosystem develops the next generation of autonomous robots, Talla announced major updates to the next release of Isaac Sim. This includes improved sensor and lidar support to more accurately model real-world performance, a new conveyor-building tool, a new utility to add people to the simulation environment, a collection of new sim-ready warehouse assets and a host of new popular robots that come pre-integrated.\n\nFor the open-source ROS developer community, this release upgrades support for ROS 2 Humble and Windows, Talla added. And for robotics researchers, NVIDIA is introducing a new tool called Isaac ORBIT , which provides operating environments for manipulator robots. NVIDIA has also improved Isaac Gym for reinforcement learning and updated Isaac Cortex for collaborative robot programming.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDEvMDMvZ2FtaW5nLWNyZWF0b3Itcm9ib3RpY3MtYXV0by1jZXMv.pdf"}, {"question": "Which companies are partnering with NVIDIA to develop software on NVIDIA DRIVE?", "gt_answer": "Hundreds of partners across the automotive ecosystem, including 20 of the top 30 manufacturers building new energy vehicles, many of the industry\u2019s top tier one manufacturers and software makers, and eight of the largest 10 trucking and robotaxi companies.", "gt_context": "\u201cWe are committed to advancing robotics and arguably investing more than anyone else in the world,\u201d Talla said. \u201cWe are well on the way to having a thousand to million times more virtual robots for every physical robot deployed.\u201d\n\nThe NVIDIA DRIVE platform is open and easy to program, said Ali Kani, vice president of automotive at NVIDIA.\n\nHundreds of partners across the automotive ecosystem are now developing software on NVIDIA DRIVE, including 20 of the top 30 manufacturers building new energy vehicles, many of the industry\u2019s\n\ntop tier one manufacturers and software makers, plus eight of the largest 10 trucking and robotaxi companies.\n\nIt\u2019s a number that continues to grow, with Kani announcing a partnership with Foxconn , the world\u2019s largest technology manufacturer and service provider, to build electric vehicles based on NVIDIA DRIVE Hyperion .\n\n\u201cWith Hyperion adoption, Foxconn will manufacture vehicles with leading electric range as well as state-of-the-art AV technology while reducing time to market,\u201d Kani said.\n\nKani touched on how, as next-generation cars become autonomous and electric, interiors are transformed into mobile living spaces, complete with the same entertainment available at home. GeForce NOW will be \u201ccoming to screens in your car,\u201d Kani said.\n\nKani also announced several DRIVE partners are integrating GeForce NOW , including Hyundai Motor Group, BYD and Polestar.\n\nWhile gamers will enjoy virtual worlds from inside their cars, tools such as the metaverse are critical to the development and testing of new autonomous vehicles.\n\nKani announced that Mercedes-Benz is using digital twin technology to plan and build more efficient production facilities. \u201cThe applications for Omniverse in the automotive market are staggering,\u201d Kani said.\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/01/03/gaming-creator-robotics-auto-ces/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDEvMDMvZ2FtaW5nLWNyZWF0b3Itcm9ib3RpY3MtYXV0by1jZXMv.pdf"}, {"question": "What is NVIDIA DRIVE Sim?", "gt_answer": "NVIDIA DRIVE Sim is a platform that allows automakers to design vehicle interiors and retail experiences entirely in the virtual world.", "gt_context": "Intelligent Design: NVIDIA DRIVE Revolutionizes Vehicle Interior Experiences\n\nNVIDIA DRIVE IX and DRIVE Sim now enable end-to-end digital twins and workflows, from vehicle design to retail.\n\nAuthor: Dan Berman\n\nAI is extending further into the vehicle as autonomous-driving technology becomes more prevalent.\n\nWith the NVIDIA DRIVE platform, automakers can design and implement intelligent interior features to continuously surprise and delight customers.\n\nIt all begins with the compute architecture. The recently introduced NVIDIA DRIVE Thor platform unifies traditionally distributed functions in vehicles \u2014 including digital cluster, infotainment, parking and assisted driving \u2014 for greater efficiency in development and faster software iteration.\n\nNVIDIA DRIVE Concierge , built on the DRIVE IX software stack, runs an array of safety and convenience features, including driver and occupant monitoring, digital assistants and autonomous-vehicle visualization.\n\nAutomakers can benefit from NVIDIA data center solutions even if they aren\u2019t using the NVIDIA DRIVE platform. With cloud technology, vehicles can stream the NVIDIA GeForce NOW cloud-gaming service without any special equipment. Plus, developers can train, test and validate in-vehicle AI models on NVIDIA DGX servers.\n\nThe same data center technology that\u2019s accelerating AI development \u2014 in combination with the NVIDIA Omniverse platform for creating and operating metaverse applications \u2014 is also revolutionizing the automotive product cycle. Using NVIDIA DRIVE Sim built on Omniverse, automakers can design vehicle interiors and retail experiences entirely in the virtual world.\n\nDesigning and selling vehicles requires the highest levels of organization and orchestration. The cockpit alone has dozens of components \u2014 such as steering wheel, cluster and infotainment \u2014 that developers must create and integrate with the rest of the car.\n\nThese processes are incredibly time- and resource-intensive \u2014 there are countless configurations, and chosen designs must be built out and tested prior to production. Vehicle designers must collaborate on various layouts, which must then be validated and approved. Customers must travel to dealerships to experience various options, and the ability to test features depends on a store\u2019s inventory at any given time.\n\nIn the virtual world, developers can easily design vehicles, and car buyers can seamlessly test them, leading to an optimal experience on both ends of the production pipeline.\n\nAutomakers operate design centers around the world, tapping into expertise from North America, Europe, Asia and other automotive hubs. Working on user experience concepts across these locations requires frequent international travel and close coordination.\n\nWith DRIVE Sim, designers and engineers anywhere in the world can work together to develop the cockpit experience, without having to leave their desks.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDEvMDMvZHJpdmUtc2ltLWl4LXZlaGljbGUtaW50ZXJpb3ItZXhwZXJpZW5jZXMv.pdf"}, {"question": "What are the benefits of using DRIVE Sim in the car design process?", "gt_answer": "The benefits of using DRIVE Sim in the car design process include saving time and resources by testing concepts in the virtual world, reviewing designs before production, and ensuring they meet safety standards.", "gt_context": "Design teams can also save time and valuable resources by testing concepts in the virtual world, without having to wait for physical prototypes. Decision-makers can review designs and ensure they meet relevant safety standards in DRIVE Sim before sending them to production.\n\nThe benefits of in-vehicle simulation extend far beyond the design phase.\n\nConsumers are increasingly expecting full-service digital retail experiences. More than 60% of shoppers want to conduct more of the car-buying process online compared to the last time they bought a vehicle, while more than 75% are open to buying a car entirely online, according to an Autotrader survey .\n\nThe same tools used to design the vehicle can help meet these rising consumer expectations.\n\nWith DRIVE Sim, car buyers can configure and test the car from the comfort of their homes. Customers can see all potential options and combinations of vehicle features at the push of a button and take their dream car for a virtual spin \u2014 no lengthy trips to the dealership required.\n\nFrom concept design to customer experience, DRIVE Sim is easing the process and opening up new ways to design and enjoy intelligent vehicles.\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/01/03/drive-sim-ix-vehicle-interior-experiences/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDEvMDMvZHJpdmUtc2ltLWl4LXZlaGljbGUtaW50ZXJpb3ItZXhwZXJpZW5jZXMv.pdf"}, {"question": "Who developed GatorTron, a neural network for healthcare research?", "gt_answer": "UF Health, the university's academic health center, teamed with NVIDIA to develop GatorTron.", "gt_context": "UF Provost Joe Glover on Building a Leading AI University\n\nAuthor: Brian Caulfield\n\nWhen NVIDIA co-founder Chris Malachowsky approached University of Florida Provost Joe Glover with the offer of an AI supercomputer, he couldn\u2019t have predicted the transformative impact it would have on the university. In just a short time, UF has become one of the top public colleges in the U.S. and developed a groundbreaking neural network for healthcare research.\n\nIn a recent episode of NVIDIA\u2019s AI Podcast , host Noah Kravitz sat down with Glover, who is also senior vice president of academic affairs at UF. The two discussed the university\u2019s efforts to put AI to work across all aspects of higher education, including a public-private partnership with NVIDIA that has helped transform UF into one of the leading AI universities in the country.\n\nJust a year after the partnership was unveiled in July 2020, UF rose to No. 5 on the U.S. News and World Report\u2019s list of the best public colleges in the U.S. The ranking was, in part, a recognition of UF\u2019s vision for infusing AI into its teaching and research.\n\nLast March, UF Health, the university\u2019s academic health center, teamed with NVIDIA to develop GatorTron, a neural network that generates synthetic clinical data researchers can use to train other AI models in healthcare.\n\nAccording to Glover, the success of UF\u2019s AI initiatives can be attributed to \u201ca combination of generous philanthropy, some good decisions, a little inspiration and a few miracles here and there along the way.\u201d\n\nHe believes that the university\u2019s AI-powered vision has significantly impacted its teaching and research and will continue to do so in the future.\n\nArt(ificial) Intelligence: Pindar Van Arman Builds Robots That Paint\n\nPindar Van Arman, an American artist and roboticist, designs painting robots that explore the differences between human and computational creativity. Since his first system in 2005, he has built multiple artificially creative robots. The most famous, Cloud Painter, was awarded first place at Robotart 2018.\n\nReal or Not Real? Attorney Steven Frank Uses Deep Learning to Authenticate Art\n\nSteven Frank is a partner at the law firm Morgan Lewis, specializing in intellectual property and commercial technology law. He\u2019s also half of the husband-wife team that used convolutional neural networks to authenticate artistic masterpieces, including da Vinci\u2019s Salvador Mundi , with AI\u2019s help.\n\nGANTheftAuto: Harrison Kinsley on AI-Generated Gaming Environments\n\nHumans playing games against machines is nothing new, but now computers can develop games for people to play. Programming enthusiast and social media influencer Harrison Kinsley created GANTheftAuto, an AI-based neural network that generates a playable chunk of the classic video game Grand Theft Auto V .\n\nYou can now listen to the AI Podcast through Amazon Music .", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDEvMDQvdW5pdmVyc2l0eS1vZi1mbG9yaWRhLWFpLw==.pdf"}, {"question": "Where can I listen to the AI Podcast?", "gt_answer": "You can listen to the AI Podcast through Amazon Music, Apple Music, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher, and TuneIn.", "gt_context": "You can now listen to the AI Podcast through Amazon Music .\n\nAlso get the AI Podcast through Apple Music , Google Podcasts , Google Play , Castbox , DoggCatcher, Overcast , PlayerFM , Pocket Casts, Podbay , PodBean , PodCruncher, PodKicker, Soundcloud , Spotify , Stitcher and TuneIn .\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/01/04/university-of-florida-ai/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDEvMDQvdW5pdmVyc2l0eS1vZi1mbG9yaWRhLWFpLw==.pdf"}, {"question": "What resources do developers use to deliver PC cloud gaming experiences?", "gt_answer": "Developers use NVIDIA resources, such as APIs and SDKs, to deliver PC cloud gaming experiences.", "gt_context": "GFN Thursday Celebrates 1,500+ Games and Their Journey to GeForce NOW GeForce NOW provides game developers with NVIDIA cloud APIs and SDKs for zero port work; plus, new RTX 4080 SuperPOD now online, five new games available to stream and a marvel-ous new reward.\n\nAuthor: GeForce NOW Community\n\nGamers love games \u2014 as do the people who make them.\n\nGeForce NOW streams over 1,500 games from the cloud, and with the Game Developers Conference in full swing this week, today\u2019s GFN Thursday celebrates all things games: the tech behind them, the tools that bring them to the cloud, the ways to play them and the new ones being added to the library this week.\n\nDevelopers use a host of NVIDIA resources to deliver the best in PC cloud gaming experiences. CD PROJEKT RED, one of many developers to tap into these resources, recently announced a new update coming to Cyberpunk 2077 on April 11 \u2014 including a new technology preview for Ray Tracing: Overdrive Mode that enables full ray tracing on GeForce RTX 40 Series GPUs and RTX 4080 SuperPODs.\n\nIn addition, members in and around Sofia, Bulgaria, can now experience the best of GeForce NOW Ultimate cloud gaming. It\u2019s the latest city to roll out RTX 4080 gaming rigs to GeForce NOW servers around the globe.\n\nPlus, with five new games joining the cloud this week, and an upcoming marvel-ous reward, GeForce NOW members can look forward to a busy weekend of streaming goodness.\n\nGDC presents the ideal time to spotlight GeForce NOW tools that enable developers to seamlessly bring their games to the cloud. NVIDIA tools, software development kits (SDKs) and partner engines together enable the production of stunning real-time content that uses AI and ray tracing. And bringing these games to billions of non-PC devices is as simple as checking an opt-in box.\n\nGeForce NOW taps into existing game stores, allowing game developers to reap the benefits of a rapidly growing audience without the hassle of developing for another platform. This means zero port work to bring games to the cloud. Users don\u2019t have to buy games for another platform and can play them on many of the devices they already own.\n\nDevelopers who want to do more have access to the GeForce NOW Developer Platform \u2014 an SDK and toolset empowering integration of, interaction with and testing on the NVIDIA cloud gaming service. It allows developers to enhance their games to run more seamlessly, add cloud gaming into their stores and launchers, and let users connect their accounts and libraries to GeForce NOW.\n\nThe SDK is a set of APIs, runtimes, samples and documentation that allows games to query for cloud execution and enable virtual touchscreens; launchers to trigger cloud streaming of a specified game; and GeForce NOW and publisher backends to facilitate account linking and game library ownership syncing, already available for Steam and Ubisoft games.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDMvMjMvZ2Vmb3JjZS1ub3ctdGh1cnNkYXktbWFyY2gtMjMv.pdf"}, {"question": "What company will be using NVIDIA cloud gaming infrastructure for an interactive experience?", "gt_answer": "Improbable", "gt_context": "Content developers have a slew of opportunities to bring their virtual worlds and interactive experiences to users in unique ways, powered by the cloud.\n\nMetaverse services company Improbable will use NVIDIA cloud gaming infrastructure for an interactive, live, invite-only experience that will accommodate up to 10,000 guests. Other recent developer events included the DAF Trucks virtual experience , where potential customers took the newest DAF truck for\n\na test drive in a simulated world, with PixelMob\u2019s Euro Truck Simulator 2 providing the virtual playground.\n\nFurthermore, CD PROJEKT RED will be delivering full ray tracing, aka path tracing , to Cyberpunk 2077. Such effects were previously only possible for film and TV. With the power of a GeForce RTX 4080 gaming rig in the cloud, Ultimate members will be able to stream the new technology preview for the Ray Tracing: Overdrive Mode coming to Cyberpunk 2077 across devices \u2014 even Macs \u2014 no matter the game\u2019s system requirements.\n\nGeForce NOW Ultimate members have been enjoying Marvel\u2019s Midnight Suns \u2019 ultra-smooth, cinematic game play thanks to DLSS 3 technology support on top of RTX-powered ray tracing, which together enable graphics breakthroughs.\n\nNow, members can fight among the legends with Captain Marvel\u2019s Medieval Marvel suit in a free reward, which will become available at the end of the month \u2014 first to Premium members who are opted into GeForce NOW rewards. This reward is only available until May 6, so upgrade to an Ultimate or Priority membership today and opt into rewards to get first access.\n\nNext, on to the five new games hitting GeForce NOW week for a happy weekend:\n\nTchia (New release on Epic Games Store )\n\nChess Ultra (New release on Epic Games Store , March 23)\n\nAmberial Dreams ( Steam )\n\nSymphony of War: The Nephilim Saga ( Steam )\n\nNo One Survived ( Steam )\n\nAnd with that, we\u2019ve got a question to end this GFN Thursday:\n\nYou've got free rent for a year to live in a video game city of your choice, which one are you choosing? n\n\n\u2014 nn NVIDIA GeForce NOW (@NVIDIAGFN) March 22, 2023\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/03/23/geforce-now-thursday-march-23/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDMvMjMvZ2Vmb3JjZS1ub3ctdGh1cnNkYXktbWFyY2gtMjMv.pdf"}, {"question": "What is Activ Surgical using NVIDIA Clara Holoscan for?", "gt_answer": "Activ Surgical has selected NVIDIA Clara Holoscan to accelerate development of its AI and augmented-reality solution for real-time surgical guidance.", "gt_context": "NVIDIA Medical Edge AI Computing Platform Selected by Top Robotic and Digital Surgery Startups\n\nActiv Surgical, Moon Surgical and Proximie will bring real-time AI to their surgery platforms using NVIDIA Clara Holoscan on NVIDIA IGX.\n\nAuthor: Raghav Mani\n\nNVIDIA today introduced the NVIDIA IGX platform for medical edge AI use cases, bringing advanced security and safety to intelligent machines and human-machine collaboration.\n\nIGX is a hardware and software platform that delivers secure, low-latency AI inference to meet the clinical demand for instant insights from a range of devices and sensors for medical applications, including robotic-assisted surgery and patient monitoring.\n\nThe IGX platform supports NVIDIA Clara Holoscan , a domain-specific platform that allows medical-device developers to bridge edge, on-premises data center and cloud services. This integration enables the rapid development of new, software-defined devices that bring the latest AI applications directly into the operating room.\n\nThree leading medical-device startups \u2014 Activ Surgical, Moon Surgical and Proximie \u2014 have selected the combination of NVIDIA Clara Holoscan running on the IGX platform to power their surgical robotics systems. All three are members of NVIDIA Inception , a global program that helps technology startups evolve faster.\n\nThey\u2019re among more than 70 medical device companies, medical centers and startups already using Clara Holoscan to advance their efforts to deploy AI computing in clinical settings.\n\nActiv Surgical has selected NVIDIA Clara Holoscan to accelerate development of its AI and augmented-reality solution for real-time surgical guidance. The Boston-based company\u2019s ActivSight technology allows surgeons to view critical physiological structures and functions, like blood flow, that cannot be seen with the naked eye.\n\nBy integrating this information into surgical imaging systems, the company aims to reduce surgical complication rates, improving patient care and safety.\n\n\u201cNVIDIA Clara Holoscan will help us optimize precious engineering resources and go to market faster,\u201d says Tom Calef, chief technology officer at Activ Surgical. \u201cWith Clara Holoscan and NVIDIA IGX, we envision that our intraoperative AI solution will transform the collective surgical experience with data-driven insights, helping make world-class surgery accessible for all.\u201d\n\nParis-based robotic surgery company Moon Surgical is designing Maestro, an accessible, adaptive surgical-assistant robotics system that works with the equipment and workflows that operating rooms already have in place.\n\n\u201cNVIDIA has all the hardware and software figured out, with an optimized architecture and libraries,\u201d said Anne Osdoit, CEO of Moon Surgical. \u201cClara Holoscan helps us not worry about things we typically spend a lot of time working on in the medical-device development cycle.\u201d", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMDkvMjAvaWd4LWNsYXJhLWhvbG9zY2FuLWVkZ2UtYWktcm9ib3RpYy1zdXJnZXJ5Lw==.pdf"}, {"question": "What is the purpose of Clara Holoscan for Proximie?", "gt_answer": "Clara Holoscan allows Proximie to provide local video processing in the operating room, improving performance for users while maintaining data privacy and lowering cloud-computing costs.", "gt_context": "The company has instead been able to focus its engineering resources on AI algorithms and other unique features. Adopting Clara Holoscan saved them time and resources, helping them compress their development timeline.\n\nLondon-based Proximie is building a telepresence platform to enable real-time, remote surgeon collaboration. Clara Holoscan will allow the company to provide local video processing in the operating room, improving performance for users while maintaining data privacy and lowering cloud-computing costs.\n\n\u201cWe are delighted to work with NVIDIA to strengthen the health ecosystem and further our mission to connect operating rooms globally,\u201d said Dr. Nadine Hachach-Haram, founder and CEO of Proximie. \u201cThanks to this collaboration, we are able to provide the most immersive experience possible and deliver a resilient digital solution, with which operating-room devices all over the world can communicate with each other and capture valuable insights.\u201d\n\nProximie is already deployed in more than 500 operating rooms around the world, and has recorded tens of thousands of surgical procedures to date.\n\nThe NVIDIA IGX platform is powered by NVIDIA IGX Orin, the world\u2019s most powerful, compact and energy-efficient AI supercomputer for medical devices. IGX Orin developer kits will be available early next year.\n\nIGX features industrial-grade components designed for medical certification, making it easier to take medical devices from clinical trials to real-world deployment.\n\nEmbedded-computing manufacturers ADLINK, Advantech , Dedicated Computing , Kontron, Leadtek, MBX, Onyx, Portwell , Prodrive Technologies and YUAN will be among the first to build products based on NVIDIA IGX for the medical device industry.\n\nLearn more about the NVIDIA IGX platform in a special address by Kimberly Powell , NVIDIA\u2019s vice president of healthcare, at GTC . Register free for the virtual conference, which runs through Thursday, Sept. 22.\n\nHear from Activ Surgical and other leading startups in medical devices, medical imaging and biopharma in the GTC panel, \u201c Accelerate Patient-Centric Innovation With Makers and Breakers in Healthcare Life Science .\u201d The GTC session \u201c Take Medical AI from Research to Clinical Production With MONAI and Clara Holoscan \u201d will highlight the latest developments in MONAI and Clara Holoscan.\n\nWatch the GTC keynote address by NVIDIA founder and CEO Jensen Huang below:\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/09/20/igx-clara-holoscan-edge-ai-robotic-surgery/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMDkvMjAvaWd4LWNsYXJhLWhvbG9zY2FuLWVkZ2UtYWktcm9ib3RpYy1zdXJnZXJ5Lw==.pdf"}, {"question": "What powers Gl\u00fcxkind's Ella smart stroller?", "gt_answer": "NVIDIA Jetson edge AI platform powers Gl\u00fcxkind's Ella smart stroller.", "gt_context": "Roll Model: Smart Stroller Pushes Its Way to the Top at CES 2023\n\nStartup Gl\u00fcxkind\u2019s Ella smart stroller brings parents advanced safety features powered by NVIDIA Jetson edge AI platform.\n\nAuthor: Brian Caulfield\n\nAs any new mom or dad can tell you, parenting can be a challenge \u2014 packed with big worries and small hassles. But it may be about to get a little bit easier thanks to Gl\u00fcxkind Technologies and their smart stroller, Ella.\n\nThe company has just been named a CES 2023 Innovation Awards Honoree for their AI-powered stroller, which was designed to make life easier for new parents and caregivers.\n\n\u201cPeople love the rock-a-baby feature and the push and brake assist,\u201d said Gl\u00fcxkind co-founder and CEO Kevin Huang of a product that\u2019s become an instant sensation at the annual technology industry confab. \u201cWhen you\u2019re able to hold your child and have the stroller take care of itself, that\u2019s a pretty magical moment.\u201d\n\nThe story behind the product that\u2019s made headlines around the world began three years ago when Huang and his co-founder, Anne Hunger, had a baby daughter and went stroller shopping.\n\nAnd, like all parents, they learned about the challenges of wrangling a stroller packed with baby gear and the safety concerns that have new parents shopping for the safest vehicles they can afford.\n\n\u201cI realized, \u2018man, this stuff hasn\u2019t changed in the last 30 years,\u2019\u201d Huang said.\n\nModern cars, for example, are equipped with systems that ensure they don\u2019t roll backward when you\u2019re stopped on a hill, Huang explained.\n\n\u201cSo I thought maybe we can add some of the things already there for cars into this platform that actually carries our children, so we can have a safer and more convenient experience.\u201d\n\nThe response from parents at CES was overwhelmingly positive. No surprise, given the in-depth research Huang and his team conducted with new parents.\n\nBut it\u2019s also wowed tech enthusiasts worldwide, earning honors from the awards program produced by the Consumer Technology Association , the trade group behind the annual Las Vegas conference.\n\n\u201cWe came to CES with the idea of announcing the product and getting maybe three to five writeups about what we were doing,\u201d Huang said. \u201cWe didn\u2019t expect the overwhelming amount of exposure we received.\u201d\n\nThis year\u2019s CES Innovation Awards program \u2014 overseen by an elite panel of judges, including media members, designers and engineers \u2014 received a record-high number of over 2,100 submissions, making it no small feat for Ella to come out on top.\n\nHuang reports that NVIDIA\u2019s Jetson edge AI platform powers the startup\u2019s entire AI stack.\n\nGl\u00fcxkind, based in Vancouver, Canada, is a member of NVIDIA Inception , a free program designed to help startups evolve faster through access to cutting-edge technology and NVIDIA experts, opportunities to connect with venture capitalists, and co-marketing support to heighten the company\u2019s visibility.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDEvMTgvZWxsYS1zdHJvbGxlci1qZXRzb24v.pdf"}, {"question": "What kind of technology does Ella's stroller use to navigate its surroundings?", "gt_answer": "Ella's stroller uses computer vision with Jetson's GPU and CPU to map its surroundings and do pathfinding.", "gt_context": "With Jetson, Huang explains that the stroller is able to use computer vision to map the stroller\u2019s surroundings, using Jetson\u2019s GPU and CPU to process and do pathfinding.\n\nAs a result, when the child isn\u2019t in the stroller, parents can activate Ella\u2019s intelligent hands-free strolling mode.\n\nThis advanced parent-assist technology helps parents focus on their kids rather than wrangling an empty stroller packed with diapers, snacks, and other supplies.\n\n\u201cIt stays out of the way when you don\u2019t need it, but it\u2019s there when you do need it,\u201d Huang said.\n\nBut while the stroller is intelligent \u2014 able to follow a caregiver as they hold a baby or help ensure the stroller doesn\u2019t roll away on its own \u2014 it\u2019s not designed to work independently.\n\nQuite the opposite. With Ella\u2019s adaptive push and brake assistance, caregivers can enjoy effortless walks no matter the terrain \u2014 uphill, downhill or even when fully loaded with groceries and toys.\n\nElla also has features that make parenting easier, such as Rock-My-Baby mode to help little ones get the sleep they need and built-in white noise playback.\n\n\u201cWe\u2019re trying to make it so the technology we\u2019re building is augmentative to the parents\u2019 experience to make parenting easier and safer,\u201d Huang said.\n\nThe result: while parenting will never be a walk in the park, actually taking that newborn for an actual walk in the park will soon be a lot less of a hassle.\n\nImage credit: Gl\u00fcxkind Technologies\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/01/18/ella-stroller-jetson/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDEvMTgvZWxsYS1zdHJvbGxlci1qZXRzb24v.pdf"}, {"question": "Which companies have adopted the Jetson Orin family?", "gt_answer": "Canon, John Deere, Microsoft Azure, Teradyne, TK Elevator", "gt_context": "NVIDIA Jetson Orin Nano Sets New Standard for Entry- Level Edge AI and Robotics With 80x Performance Leap\n\nCanon, John Deere, Microsoft Azure, Teradyne, TK Elevator Join Over 1,000 Customers Adopting Jetson Orin Family Within Six Months of Launch\n\nGTC -- NVIDIA today expanded the NVIDIA\u00ae Jetson\u2122 lineup with the launch of new Jetson Orin Nano\u2122 system-on- modules that deliver up to 80x the performance over the prior generation, setting a new standard for entry-level edge AI and robotics.\n\nFor the first time, the NVIDIA Jetson family spans six Orin-based production modules to support a full range of edge AI and robotics applications. This includes the Orin Nano \u2014 which delivers up to 40 trillion operations per second (TOPS) of AI performance in the smallest Jetson form factor \u2014 up to the AGX Orin\u2122, delivering 275 TOPS for advanced autonomous machines.\n\nJetson Orin features an NVIDIA Ampere architecture GPU, Arm-based CPUs, next-generation deep learning and vision accelerators, high-speed interfaces, fast memory bandwidth and multimodal sensor support. This performance and versatility empower more customers to commercialize products that once seemed impossible, from engineers deploying edge AI applications to Robotics Operating System (ROS) developers building next-generation intelligent machines.\n\n\u201cOver 1,000 customers and 150 partners have embraced Jetson AGX Orin since NVIDIA announced its availability just six months ago, and Orin Nano will significantly expand this adoption,\u201d said Deepu Talla, vice president of embedded and edge computing at NVIDIA. \u201cWith an orders-of-magnitude increase in performance for millions of edge AI and ROS developers today, Jetson Orin is the ideal platform for virtually every kind of robotics deployment imaginable.\u201d\n\nMaking Edge AI and Robotics More Accessible The Orin Nano modules are form-factor- and pin-compatible with the previously announced Orin NX modules. Full emulation support allows customers to get started developing for the Orin Nano series today using the AGX Orin developer kit. This gives customers the flexibility to design one system to support multiple Jetson modules and easily scale their applications.\n\nOrin Nano supports multiple concurrent AI application pipelines with high-speed I/O and an NVIDIA Ampere architecture GPU. Developers of entry-level devices and applications such as retail analytics and industrial quality control benefit from easier access to more complex AI models at lower cost.\n\nThe Orin Nano modules will be available in two versions. The Orin Nano 8GB delivers up to 40 TOPS with power configurable from 7W to 15W, while the 4GB version delivers up to 20 TOPS with power options as low as 5W to 10W.", "document": "SmV0c29uIE9yaW4gTmFubyA5LzIwLzIyLnBkZg==.pdf"}, {"question": "When will the Jetson Orin Nano modules be available?", "gt_answer": "The Jetson Orin Nano modules will be available in January, starting at $199.", "gt_context": "The Jetson Orin platform is designed to solve the toughest robotics challenges and brings accelerated computing to over 700,000 ROS developers. Combined with the powerful hardware capabilities of Orin Nano, enhancements in the latest NVIDIA Isaac\u2122 software for ROS put increased performance and productivity in the hands of roboticists.\n\nStrong Ecosystem and Software Support Jetson Orin has seen broad support across the robotics and embedded computing ecosystem, including from Canon, John Deere, Microsoft Azure, Teradyne, TK Elevator and many more.\n\nThe NVIDIA Jetson ecosystem is growing rapidly, with over 1 million developers, 6,000 customers \u2014 including 2,000 startups \u2014 and 150 partners. Jetson partners offer a wide range of support from AI software, hardware and application design services to cameras, sensors and peripherals, developer tools and development systems.\n\nOrin Nano is supported by the NVIDIA JetPack\u2122 software development kit and is powered by the same NVIDIA CUDA-X\u2122 accelerated computing stack used to create breakthrough AI products in such fields as industrial IoT, manufacturing, smart cities and more.\n\nAvailability The Jetson Orin Nano modules will be available in January, starting at $199.\n\nAbout NVIDIA Since its founding in 1993, NVIDIA (NASDAQ: NVDA) has been a pioneer in accelerated computing. The company\u2019s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics and ignited the era of modern AI. NVIDIA is now a full-stack computing company with data-center-scale offerings that are reshaping industry. More information at https://nvidianews.nvidia.com/.\n\nCertain statements in this press release including, but not limited to, statements as to: the benefits, impact, performance,", "document": "SmV0c29uIE9yaW4gTmFubyA5LzIwLzIyLnBkZg==.pdf"}, {"question": "What are some features of NVIDIA's products and technologies?", "gt_answer": "Some features of NVIDIA's products and technologies include NVIDIA Jetson Orin Nano, Jetson AGX Orin, the Orin Nano modules, NVIDIA Isaac, the NVIDIA JetPack SDK, and NVIDIA CUDA-X.", "gt_context": "features and availability of our products and technologies, including NVIDIA Jetson Orin Nano, Jetson AGX Orin, the Orin Nano modules, NVIDIA Isaac, the NVIDIA JetPack SDK and NVIDIA CUDA-X; customers and partners embracing Jetson AGX Orin; Orin Nano significantly expanding the adoption of Jetson AGX Orin; and NVIDIA\u2019s Jetson ecosystem growing rapidly are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners\u2019 products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company\u2019s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.\n\n\u00a9 2022 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, CUDA-X, Jetson, Jetson AGX Orin, Jetson Orin Nano, NVIDIA Isaac and NVIDIA JetPack are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability, and specifications are subject to change without notice.\n\nDavid Pinto +1-408-566-6950 dpinto@nvidia.com", "document": "SmV0c29uIE9yaW4gTmFubyA5LzIwLzIyLnBkZg==.pdf"}, {"question": "What are some of the new games joining the GeForce NOW library in March?", "gt_answer": "In March, a total of 19 new games are joining the GeForce NOW library, including 'Disney Dreamlight Valley,' 'Monster Hunter Rise,' 'Battlefield 2042' Season 4, and 'Destiny 2' Lightfall expansion.", "gt_context": "GeForce NOW Springs Into March With 19 New Games in the Cloud, Including \u2018Disney Dreamlight Valley\u2019\n\n\u2018Monster Hunter Rise\u2019 and expansion \u2018Sunbreak,\u2019 \u2018Battlefield 2042\u2019 Season 4, \u2018Destiny 2\u2019 Lightfall expansion now available.\n\nAuthor: GeForce NOW Community\n\nMarch is already here and a new month always means new games, with a total of 19 joining the GeForce NOW library.\n\nSet off on a magical journey to restore Disney magic when Disney Dreamlight Valley joins the cloud later this month. Plus, the hunt is on with Capcom\u2019s Monster Hunter Rise now available for all members to stream, as is major new content for Battlefield 2042 and Destiny 2 .\n\nStay tuned to GFN Thursday for future updates on the first Microsoft titles coming to GeForce NOW .\n\nEmbark on a dream adventure when Disney Dreamlight Valley from Gameloft releases in the cloud on Thursday, March 16. In this life-sim adventure game, Disney and Pixar characters live in harmony until the Forgetting threatens to destroy the wonderful memories created by its inhabitants. Help restore Disney magic to the Valley and go on an enchanting journey \u2014 full of quests, exploration and beloved Disney and Pixar friends.\n\nLive the Disney dream life while collecting thousands of decorative items inspired by Disney and Pixar worlds to personalize gamers\u2019 own unique homes in the Valley. The game\u2019s latest free update, \u201cA Festival of Friendship,\u201d brings even more features, items and characters to interact with.\n\nDisney fans of all ages will enjoy seeing their favorite characters, from Disney Encanto\u2019s Mirabel to The Lion King \u2019s Scar, throughout the game when it launches in the cloud later this month. Members can jump onto their PC, Mac and other devices to start the adventure without having to worry about download times, system requirements or storage space.\n\nStarting off the month is Capcom\u2019s popular action role-playing game Monster Hunter Rise: Sunbreak, including Free Title Update 4 , which brings the return of the Elder Dragon Velkhana, lord of the tundra that freezes all in its path. The game is now available for GeForce NOW members to stream , so new and returning Hunters can seamlessly bring their monster hunting careers to the cloud.\n\nNew content is also available for members to stream this week for blockbuster titles. Eleventh Hour is the latest season release for Battlefield 2042, including a new map, specialist, weapon and vehicle to help players dominate the battle.\n\nLightfall , Destiny 2\u2019s latest expansion following last year\u2019s The Witch Queen , brings Guardians one step closer to the conclusion of the \u201cLight and Darkness saga.\u201d Experience a brand new campaign, Exotic gear and weapons, a new six-player raid, and more as players prepare for the beginning of the end.\n\nOn top of all that, here are the three new games being added this week:\n\nMonster Hunter Rise ( Steam )\n\nVoltaire: The Vegan Vampire (New release on Steam )\n\nRise of Industry (Free on Epic Games Store )", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDMvMDIvZ2Vmb3JjZS1ub3ctdGh1cnNkYXktbWFyY2gtMi8=.pdf"}, {"question": "Which game was removed from GeForce NOW on March 1 due to a technical issue?", "gt_answer": "Command & Conquer Remastered Collection", "gt_context": "Voltaire: The Vegan Vampire (New release on Steam )\n\nRise of Industry (Free on Epic Games Store )\n\nHere\u2019s what the rest of March looks like:\n\nHotel Renovator (New release on Steam , Mar. 7)\n\nClash: Artifacts of Chaos (New release on Steam , Mar. 9)\n\nFigment 2: Creed Valley (New release on Steam , Mar. 9)\n\nMonster Energy Supercross \u2013 The Official Videogame 6 (New release on Steam , Mar. 9)\n\nBig Ambitions (New release on Steam , Mar. 10)\n\nThe Legend of Heroes: Trails to Azure (New release on Steam , Mar. 14)\n\nSmalland: Survive the Wilds (New release on Steam , Mar. 29)\n\nRavenbound (New release on Steam , Mar. 30)\n\nDREDGE (New release on Steam , Mar. 30)\n\nThe Great War: Western Front (New release on Steam , Mar. 30)\n\nSystem Shock (New release on Steam and Epic Games Store )\n\nAmberial Dreams ( Steam )\n\nDisney Dreamlight Valley ( Steam and Epic Games Store )\n\nNo One Survived ( Steam )\n\nSymphony of War: The Nephilim Saga ( Steam )\n\nTower of Fantasy ( Steam )\n\nWhile February is the shortest month, there was no shortage of games. Four extra games were added to the cloud for GeForce NOW members on top of the 25 games announced:\n\nBaldur\u2019s Gate 3 ( Steam )\n\nRecipe for Disaster (Free on Epic Games , Feb. 9-16)\n\nSons of the Forest (New release on Steam , Feb. 23)\n\nWarpips ( Epic Games Store )\n\nA few games announced didn\u2019t make it into February due to shifts in their release dates, including Above Snakes and Heads Will Roll: Reforged . Command & Conquer Remastered Collection was removed from GeForce NOW on March 1 due to a technical issue. Additionally, PERISH and the Dark and Darker playtest didn\u2019t make it to the cloud this month. Look for updates in a future GFN Thursday on some of these titles.\n\nFinally, we\u2019ve got a question to start your weekend gaming adventures. Let us know your answer in the comments below or on Twitter and Facebook .\n\nShare your favorite video game companion and why they are the best. \u2014 nn NVIDIA GeForce NOW (@NVIDIAGFN) March 1, 2023\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/03/02/geforce-now-thursday-march-2/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDMvMDIvZ2Vmb3JjZS1ub3ctdGh1cnNkYXktbWFyY2gtMi8=.pdf"}, {"question": "What updates were announced for NVIDIA Omniverse?", "gt_answer": "New features and improvements to apps including Create, Machinima, Audio2Face and Nucleus were announced for NVIDIA Omniverse.", "gt_context": "Future of Creativity on Display \u2018In the NVIDIA Studio\u2019 During SIGGRAPH Special Address Major NVIDIA Omniverse updates power 3D virtual worlds, digital twins and avatars, reliably boosted by August NVIDIA Studio Driver; #MadeInMachinima contest winner revealed.\n\nAuthor: Gerardo Delgado\n\nEditor\u2019s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows.\n\nA glimpse into the future of AI-infused virtual worlds was on display at SIGGRAPH \u2014 the world\u2019s largest gathering of computer graphics experts \u2014 as NVIDIA founder and CEO Jensen Huang put the finishing touches on the company\u2019s special address .\n\nAnnouncements included a host of updates to a pillar of the NVIDIA Studio software suite: NVIDIA Omniverse , a platform for 3D design collaboration and world simulation. New features and improvements to apps including Create , Machinima , Audio2Face and Nucleus will help 3D artists build virtual worlds, digital twins and avatars for the metaverse .\n\nEach month, NVIDIA Studio Driver releases provide artists, creators and 3D developers with the best performance and reliability when working with creative applications. Available now, the August NVIDIA Studio Driver gives creators peak reliability for using Omniverse and their favorite creative apps.\n\nPlus, this week\u2019s featured In the NVIDIA Studio artist, Simon Lavit, exhibits his mastery of Omniverse as the winner of the #MadeInMachinima contest . The 3D artist showcases the creative workflow for his victorious short film, Painting the Astronaut .\n\nNVIDIA Omniverse \u2014 an open platform based on Universal Scene Description (USD) for building and connecting virtual worlds \u2014 just received a significant upgrade .\n\nOmniverse Apps \u2014 including Create 2022.2 \u2014 received a major PhysX update with soft-body simulation, particle-cloth simulation and soft-contact models, delivering more realism to physically accurate virtual worlds. Added OmniLive workflows enable artists more freedom through a new collaboration interface for non-destructive USD workflows.\n\nAudio2Face 2022.1 is now available in beta, including major updates that enable AI-powered emotion control and full facial animation, delivering more realism than ever. Users can now direct emotion over time, as well as mix key emotions like joy, amazement, anger and sadness. The AI can also direct eye, teeth and tongue motion, in addition to the avatar\u2019s skin, providing an even more complete facial-animation solution.\n\nLearn additional details on these updates and more .\n\nSince he first held a pen, Simon Lavit has been an artist. Now, Lavit adds Omniverse Machinima to the list of creative tools he\u2019s mastered, as the winner of the #MadeInMachinima contest.\n\nHis entry, Painting the Astronaut , was selected by an esteemed panel of judges that included numerous creative experts.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMDgvMDkvaW4tdGhlLW52aWRpYS1zdHVkaW8tYXVndXN0LTkv.pdf"}, {"question": "What is the current contest that creators can participate in?", "gt_answer": "The current contest that creators can participate in is the #ExtendOmniverse contest.", "gt_context": "Powered by a GeForce RTX 3090 GPU , Lavit\u2019s creative workflow showcases the breadth and interoperability of Omniverse, its Apps and Connectors . He used lighting and scene setting to establish the short film\u2019s changing mood, helping audiences understand the story\u2019s progression. Its introduction, for example, is bright and clear. The film then gets darker, conveying the idea of the unknown as the\n\ncharacter starts his journey.\n\nLavit storyboarded on paper before starting his digital process with the Machinima and Omniverse Create apps. He quickly turned to NVIDIA\u2019s built-in 3D asset library, filled with free content from Mount & Blade II: Bannerlord , Mechwarrior 5: Mercenaries , Squad and more \u2013 to populate the scene.\n\nThen, Lavit used Autodesk Maya to create 3D models for some of his hero assets \u2014 like the protagonist Sol\u2019s spaceship. The Maya Omniverse Connector allowed him to visualize scenes within Omniverse Create. He also benefited from RTX-accelerated ray tracing and AI denoising in Maya, resulting in highly interactive and photorealistic renders.\n\nNext, Lavit textured the models in Adobe Substance 3D, which also has an Omniverse Connector . Substance 3D uses NVIDIA Iray rendering, including for textures and substances. It also features RTX-accelerated light- and ambient-occlusion baking, which optimizes assets in seconds.\n\nLavit then returned to Machinima for final layout, animation and render. The result was composited using Adobe After Effects, with an extra layer of effects and music. What turned into the contest-winning piece of art ultimately was \u201ca pretty simple workflow to keep the complexity to a minimum,\u201d Lavit said.\n\nTo power his future creativity from anywhere, Lavit won an ASUS ProArt StudioBook 16. This NVIDIA Studio laptop packs top-of-the-line technology into a device that enables users to work on the go with world-class power from a GeForce RTX 3080 Ti Laptop GPU and beautiful 4K display.\n\nLavit, born in France and now based in the U.S., sees every project as an adventure. Living in a different country from where he was born changed his vision of art, he said. Lavit regularly finds inspiration from the French graphic novel series, The Incal , which is written by Alejandro Jodorowsky and illustrated by renowned cartoonist Jean Giraud, aka M\u0153bius.\n\nThe next generation of creative professionals is heading back to campus. Choosing the right NVIDIA Studio laptop can be tricky, but students can use this guide to find the perfect tool to power their creativity \u2014 like the Lenovo Slim 7i Pro X , an NVIDIA Studio laptop now available with a GeForce RTX 3050 Laptop GPU.\n\nWhile the #MadeInMachinima contest has wrapped, creators can graduate to an NVIDIA RTX A6000 GPU in the #ExtendOmniverse contest , running through Friday, Sept. 9, at 5 p.m. PT. Perform something akin to magic by making your own NVIDIA Omniverse Extension for a chance to win an RTX A6000 or GeForce RTX 3090 Ti GPU. Winners will be announced in September at GTC .", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMDgvMDkvaW4tdGhlLW52aWRpYS1zdHVkaW8tYXVndXN0LTkv.pdf"}, {"question": "Where can I access tutorials on NVIDIA Studio?", "gt_answer": "You can access tutorials on the Studio YouTube channel.", "gt_context": "Follow NVIDIA Omniverse on Instagram , Medium , Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums , and join our Discord server and Twitch channel to chat with the community.\n\nFollow NVIDIA Studio on Instagram , Twitter and Facebook . Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the NVIDIA Studio newsletter .\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/08/09/in-the-nvidia-studio-august-9/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMDgvMDkvaW4tdGhlLW52aWRpYS1zdHVkaW8tYXVndXN0LTkv.pdf"}, {"question": "What tools does Elara Systems use to create their 3D animations?", "gt_answer": "Elara Systems uses Adobe Substance 3D Painter, Autodesk Maya, USD Composer, and other digital content-creation apps.", "gt_context": "Meet the Omnivore: Creative Studio Aides Fight Against Sickle Cell Disease With AI-Animated Short California-based Elara Systems creates lively, educational health video using Adobe Substance 3D Painter, Autodesk Maya, USD Composer and other digital content-creation apps.\n\nAuthor: Angie Lee\n\nEditor\u2019s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to accelerate their 3D workflows and create virtual worlds.\n\nCreative studio Elara Systems doesn\u2019t shy away from sensitive subjects in its work.\n\nPart of its mission for a recent client was to use fun, captivating visuals to help normalize what could be considered a touchy health subject \u2014 and boost medical outcomes as a result.\n\nIn collaboration with Boston Scientific and the Sickle Cell Society, the Elara Systems team created a character-driven 3D medical animation using the NVIDIA Omniverse development platform for connecting 3D pipelines and building metaverse applications.\n\nThe video aims to help adolescents experiencing sickle cell disease understand the importance of quickly telling an adult or a medical professional if they\u2019re experiencing symptoms like priapism \u2014 a prolonged, painful erection that could lead to permanent bodily damage.\n\n\u201cNeedless to say, this is something that could be quite frightening for a young person to deal with,\u201d said Benjamin Samar, technical director at Elara Systems. \u201cWe wanted to make it crystal clear that living with and managing this condition is achievable and, most importantly, that there\u2019s nothing to be ashamed of.\u201d\n\nTo bring their projects to life, the Elara Systems team turns to the USD Composer app, generative AI -powered Audio2Face and Audio2Gesture , as well as Omniverse Connectors to Adobe Substance 3D Painter, Autodesk 3ds Max, Autodesk Maya and other popular 3D content-creation tools like Blender, Epic Games Unreal Engine, Reallusion iClone and Unity.\n\nFor the sickle cell project, the team relied on Adobe Substance 3D Painter to organize various 3D environments and apply custom textures to all five characters. Adobe After Effects was used to composite the rendered content into a single, cohesive short film.\n\nIt\u2019s all made possible thanks to the open and extensible Universal Scene Description (USD) framework on which Omniverse is built.\n\n\u201cUSD is extremely powerful and solves a ton of problems that many people may not realize even exist when it comes to effectively collaborating on a project,\u201d Samar said. \u201cFor example, I can build a scene in Substance 3D Painter, export it to USD format and bring it into USD Composer with a single click. Shaders are automatically generated and linked, and we can customize things further if desired.\u201d", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDUvMTAvZWxhcmEtc3lzdGVtcy1vbW5pdmVyc2UtY3JlYXRvci8=.pdf"}, {"question": "What software did the Elara Systems team use to create the 3D characters and environments?", "gt_answer": "The Elara Systems team used Autodesk Maya, Adobe Substance 3D Painter, and Autodesk 3ds Max to create the 3D characters and environments.", "gt_context": "Grounding the sickle cell awareness campaign in a relatable, personal narrative was a \u201cuniquely human approach to an otherwise clinical discussion,\u201d said Samar, who has nearly two decades of industry experience spanning video production, motion graphics, 3D animation and extended reality.\n\nThe team accomplished this strategy through a 3D character named Leon \u2014 a 13-year-old soccer lover who shares his experiences about a tough day when he first learned how to manage his sickle cell disease.\n\nThe project began with detailed discussions about Sickle Cell Society\u2019s goals for the short, followed by scripting, storyboarding and creating various sketches. \u201cOnce an early concept begins to crystallize in\n\nthe artists\u2019 minds, the creative process is born and begins to build momentum,\u201d Samar said.\n\nThen, the team created rough 2D mockups using the illustration app Procreate on a tablet. This stage of the artistic process centered on establishing character outfits, proportions and other details. The final concept art was used as a clear reference to drive the rest of the team\u2019s design decisions.\n\nMoving to 3D, the Elara Systems team tapped Autodesk Maya to build, rig and fully animate the characters, as well as Adobe Substance 3D Painter and Autodesk 3ds Max to create the short\u2019s various environments.\n\n\u201cI\u2019ve found the animated point cache export option in the Omniverse Connector for Maya to be invaluable,\u201d Samar said. \u201cIt helps ensure that what we\u2019re seeing in Maya will persist when brought into USD Composer, which is where we take advantage of real-time rendering to create high-quality visuals.\u201d\n\nThe real-time rendering enabled by Omniverse was \u201ccritically important, because without it, we would have had zero chance of completing and delivering this content anywhere near our targeted deadline,\u201d the technical artist said.\n\n\u201cI\u2019m also a big fan of the Reallusion to Omniverse workflow,\u201d he added.\n\nThe Connector allows users to easily bring characters created using Reallusion iClone into Omniverse, which helps to deliver visually realistic skin shaders. And USD Composer can enable real-time performance sessions for iClone characters when live-linked with a motion-capture system.\n\n\u201cOmniverse offers so much potential to help streamline workflows for traditional 3D animation teams, and this is just scratching the surface \u2014 there\u2019s an ever-expanding feature set for those interested in robotics, digital twins, extended reality and game design,\u201d Samar said. \u201cWhat I find most assuring is the sheer speed of the platform\u2019s development \u2014 constant updates and new features are being added at a rapid pace.\u201d\n\nCreators and developers across the world can download NVIDIA Omniverse for free , and enterprise teams can use the platform for their 3D projects.\n\nCheck out artwork from other \u201cOmnivores\u201d and submit projects in the gallery . Connect your workflows to Omniverse with software from Adobe, Autodesk, Epic Games, Maxon, Reallusion and more .", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDUvMTAvZWxhcmEtc3lzdGVtcy1vbW5pdmVyc2UtY3JlYXRvci8=.pdf"}, {"question": "Where can I join the community and chat with others interested in NVIDIA Omniverse?", "gt_answer": "You can join the NVIDIA Omniverse community and chat with others by visiting the Omniverse forums, Discord server, and Twitch channel.", "gt_context": "Follow NVIDIA Omniverse on Instagram , Medium , Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums , and join our Discord server and Twitch channel to chat with the community.\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/05/10/elara-systems-omniverse-creator/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDUvMTAvZWxhcmEtc3lzdGVtcy1vbW5pdmVyc2UtY3JlYXRvci8=.pdf"}, {"question": "What is the Volvo EX90 SUV?", "gt_answer": "The Volvo EX90 SUV is Volvo Cars' flagship vehicle that is redesigned with a new powertrain, branding, and software-defined AI compute. It is powered by the NVIDIA DRIVE Orin and DRIVE Xavier platforms.", "gt_context": "New Volvo EX90 SUV Heralds AI Era for Swedish Automaker, Built on NVIDIA DRIVE All-electric vehicle is Volvo Cars\u2019 first model powered by the high-performance DRIVE Orin compute platform.\n\nAuthor: Danny Shapiro\n\nIt\u2019s a new age for safety.\n\nVolvo Cars unveiled the Volvo EX90 SUV today in Stockholm, marking the beginning of a new era of electrification, technology and safety for the automaker. The flagship vehicle is redesigned from tip to tail \u2014 with a new powertrain, branding and software-defined AI compute \u2014 powered by the NVIDIA DRIVE Orin and DRIVE Xavier platforms.\n\nThe Volvo EX90 silhouette is in line with Volvo Cars\u2019 design principle of form following function \u2014 and looks good at the same time.\n\nUnder the hood, it\u2019s filled with cutting-edge technology for new advances in electrification, connectivity, core computing, safety and infotainment. The EX90 is the first Volvo car that is hardware-ready to deliver unsupervised autonomous driving.\n\nThese features come together to deliver an SUV that cements Volvo Cars in the next generation of software-defined vehicles.\n\n\u201cWe used technology to reimagine the entire car,\u201d said Volvo Cars CEO Jim Rowan. \u201cThe Volvo EX90 is the safest that Volvo has ever produced.\u201d\n\nThe Volvo EX90 looks smart and has the brains to back it up.\n\nVolvo Cars\u2019 proprietary software runs on NVIDIA DRIVE to operate most of the core functions inside the car, including safety, infotainment and battery management. This intelligent architecture is designed to deliver a highly responsive and enjoyable experience for every passenger in the car.\n\nDRIVE Orin and DRIVE Xavier deliver a combined 280 trillion operations per second \u2014 ample compute headroom for a software-defined architecture. The system is designed to handle the large number of applications and deep neural networks needed to achieve safety standards such as ISO 26262 ASIL-D.\n\nThe Volvo EX90 isn\u2019t just a new car. It\u2019s a highly advanced computer on wheels, designed to improve over time as Volvo Cars adds more software features.\n\nThe Volvo EX90 is just the beginning of Volvo Cars\u2019 plans for the software-defined future.\n\nThe automaker plans to launch a new EV every year through 2025, with the end goal of having a purely electric, software-defined lineup by 2030.\n\nThe new flagship SUV is available for preorder in select markets, launching the next phase in Volvo Cars\u2019 leadership in premium design and safety.\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/11/09/volvo-ex90-suv-ai-nvidia-drive/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTEvMDkvdm9sdm8tZXg5MC1zdXYtYWktbnZpZGlhLWRyaXZlLw==.pdf"}, {"question": "Which companies collaborated in the tests with NVIDIA BlueField DPUs?", "gt_answer": "Ericsson, Red Hat, and VMware.", "gt_context": "Take the Green Train: NVIDIA BlueField DPUs Drive Data Center Efficiency\n\nIn tests with Ericsson, Red Hat and VMware, data processing units enabled faster, more energy-efficient networks.\n\nAuthor: John Kim\n\nThe numbers are in, and they paint a picture of data centers going a deeper shade of green, thanks to energy-efficient networks accelerated with data processing units ( DPUs ).\n\nA suite of tests run with help from Ericsson, Red Hat and VMware show power reductions up to 24% on servers using NVIDIA BlueField-2 DPUs . In one case, they delivered 54x the performance of CPUs.\n\nThe work, described in a recent whitepaper , offloaded core networking jobs from power-hungry host processors to DPUs designed to run them with greater power efficiency .\n\nAccelerated computing with DPUs for networking, security and storage jobs is one of the next big steps for making data centers more power efficient. It\u2019s the latest of a handful of optimizations, described in the whitepaper, for data centers moving into the era of green computing .\n\nSeeing the trend toward energy-efficient networks, VMware enabled DPUs to run its virtualization software, used by thousands of companies worldwide. NVIDIA has run several tests with VMware since its vSphere 8 software release this fall.\n\nFor example, on VMware vSphere Distributed Services Engine \u2014 software that offloads and accelerates networking and security functions using DPUs \u2014 BlueField-2 delivered higher performance while freeing up 20% of the CPU\u2019s resources required without DPUs.\n\nThat means users can deploy fewer servers to run the same workload, or run more applications on the same servers.\n\nFew data centers face a more demanding job than those run by telecoms providers. Their networks shuttle every bit of data that smartphone users generate or request between their cellular networks and the internet.\n\nResearchers at Ericsson tested whether operators could reduce their power consumption on this massive workload using SmartNICs , the network interface cards that handle DPU functions. Their test let CPUs slow down or sleep while an NVIDIA ConnectX SmartNIC handled the networking tasks.\n\nThe results, detailed in a recent article , were stunning.\n\nEnergy consumption of server CPUs fell 24%, from 190 to 145 watts on a fully loaded network. This single DPU application could cut power costs by nearly $2 million over three years for a large data center.\n\nIn the article, Ericsson\u2019s CTO, Erik Ekudden, underscored the importance of the work.\n\n\u201cThere\u2019s a growing sense of urgency among communication service providers to find and implement innovative solutions that reduce network energy consumption,\u201d he wrote. And the DPU techniques \u201csave energy across a wide range of traffic conditions.\u201d\n\nResults were even more dramatic for tests on Red Hat OpenShift , used by half of all Fortune 500 banks, airlines and telcos to manage software containers.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTEvMDMvYmx1ZWZpZWxkLWRwdXMtZW5lcmd5LWVmZmljaWVuY3kv.pdf"}, {"question": "What benefits did BlueField-2 DPUs provide in terms of networking demands on CPUs?", "gt_answer": "BlueField-2 DPUs slashed networking demands on CPUs by 70%, freeing them up to run other applications. Additionally, they accelerated networking jobs by a whopping 54x.", "gt_context": "In the tests, BlueField-2 DPUs handled virtualization, encryption and networking jobs needed to manage these portable packages of applications and code.\n\nThe DPUs slashed networking demands on CPUs by 70%, freeing them up to run other applications. What\u2019s more, they accelerated networking jobs by a whopping 54x.\n\nA technical blog provides more detail on the tests.\n\nAcross every industry, businesses are embracing a philosophy of zero trust to improve network security. So, NVIDIA tested IPsec, one of the most popular data center encryption protocols, on BlueField DPUs.\n\nThe test showed data centers could improve performance and cut power consumption 21% for servers and 34% for clients on networks running IPsec on DPUs. For large data centers, that could translate to nearly $9 million in savings on electric bills over three years.\n\nNVIDIA and its partners continue to put DPUs to the test in an expanding portfolio of use cases, but the big picture is clear.\n\n\u201cIn a world facing rising energy costs and rising demand for green IT infrastructure, the use of DPUs will become increasingly popular,\u201d the whitepaper concludes.\n\nIt\u2019s good to know the numbers, but seeing is believing. So apply to run your own test of DPUs on VMware\u2019s vSphere.\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/11/03/bluefield-dpus-energy-efficiency/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTEvMDMvYmx1ZWZpZWxkLWRwdXMtZW5lcmd5LWVmZmljaWVuY3kv.pdf"}, {"question": "Why is the Grace Hopper Superchip ideal for next-gen recommender systems?", "gt_answer": "The Grace Hopper Superchip is ideal for next-gen recommender systems because it can process more data than any other processor on the planet. It includes a superfast chip-to-chip interconnect and provides 7x more bandwidth than PCIe Gen 5. It also offers the largest pools of GPU memory ever and is highly energy efficient.", "gt_context": "Why the New NVIDIA Grace Hopper Superchip Is Ideal for Next-Gen Recommender Systems\n\nPerformance of the massive AI models that help users personalize the internet will hit new levels of accuracy with the Grace Hopper Superchip.\n\nAuthor: Paresh Kharya\n\nRecommender systems, the economic engines of the internet, are getting a new turbocharger: the NVIDIA Grace Hopper Superchip .\n\nEvery day, recommenders serve up trillions of search results, ads, products, music and news stories to billions of people. They\u2019re among the most important AI models of our time because they\u2019re incredibly effective at finding in the internet\u2019s pandemonium the pearls users want.\n\nThese machine learning pipelines run on data, terabytes of it. The more data recommenders consume, the more accurate their results and the more return on investment they deliver.\n\nTo process this data tsunami, companies are already adopting accelerated computing to personalize services for their customers. Grace Hopper will take their advances to the next level.\n\nPinterest, the image-sharing social media company, was able to move to 100x larger recommender models by adopting NVIDIA GPUs. That increased engagement by 16% for its more than 400 million users.\n\n\u201cNormally, we would be happy with a 2% increase, and 16% is just a beginning,\u201d a software engineer at the company said in a recent blog . \u201cWe see additional gains \u2014 it opens a lot of doors for opportunities.\u201d\n\nThe next generation of the NVIDIA AI platform promises even greater gains for companies processing massive datasets with super-sized recommender models.\n\nBecause data is the fuel of AI, Grace Hopper is designed to pump more data through recommender systems than any other processor on the planet.\n\nGrace Hopper achieves this because it\u2019s a superchip \u2014 two chips in one unit, sharing a superfast chip-to-chip interconnect. It\u2019s an Arm-based NVIDIA Grace CPU and a Hopper GPU that communicate over NVIDIA NVLink-C2C .\n\nWhat\u2019s more, NVLink also connects many superchips into a super system, a computing cluster built to run terabyte-class recommender systems.\n\nNVLink carries data at a whopping 900 gigabytes per second \u2014 7x the bandwidth of PCIe Gen 5, the interconnect most leading edge upcoming systems will use.\n\nThat means Grace Hopper feeds recommenders 7x more of the embeddings \u2014 data tables packed with context \u2014 that they need to personalize results for users.\n\nThe Grace CPU uses LPDDR5X, a type of memory that strikes the optimal balance of bandwidth, energy efficiency, capacity and cost for recommender systems and other demanding workloads. It provides 50% more bandwidth while using an eighth of the power per gigabyte of traditional DDR5 memory subsystems.\n\nAny Hopper GPU in a cluster can access Grace\u2019s memory over NVLink. It\u2019s a feature of Grace Hopper that provides the largest pools of GPU memory ever.\n\nIn addition, NVLink-C2C requires just 1.3 picojoules per bit transferred, giving it more than 5x the energy efficiency of PCIe Gen 5.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMDkvMjAvZ3JhY2UtaG9wcGVyLXJlY29tbWVuZGVyLXN5c3RlbXMv.pdf"}, {"question": "What is NVIDIA Merlin?", "gt_answer": "NVIDIA Merlin is a collection of models, methods, and libraries for building AI systems that can provide better predictions and increase clicks.", "gt_context": "The overall result is recommenders get a further up to 4x more performance and greater efficiency using Grace Hopper than using Hopper with traditional CPUs (see chart below).\n\nThe Grace Hopper Superchip runs the full stack of NVIDIA AI software used in some of the world\u2019s largest recommender systems today.\n\nNVIDIA Merlin is the rocket fuel of recommenders, a collection of models, methods and libraries for building AI systems that can provide better predictions and increase clicks.\n\nNVIDIA Merlin HugeCTR , a recommender framework, helps users process massive datasets fast across distributed GPU clusters with help from the NVIDIA Collective Communications Library .\n\nLearn more about Grace Hopper and NVLink in this technical blog . Watch this GTC session to learn more about building recommender systems.\n\nYou can also hear NVIDIA CEO and co-founder Jensen Huang provide perspective on recommenders here or watch the full GTC keynote below.\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/09/20/grace-hopper-recommender-systems/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMDkvMjAvZ3JhY2UtaG9wcGVyLXJlY29tbWVuZGVyLXN5c3RlbXMv.pdf"}, {"question": "What is the purpose of generative AI?", "gt_answer": "Generative AI can simplify time-consuming tasks or accelerate 3D workflows to boost creativity and productivity across industries.", "gt_context": "Generative AI at GTC: Dozens of Sessions to Feature Luminaries Speaking on Tech\u2019s Hottest Topic Hear how Adobe, MoMA, OpenAI, Sony Pictures Animation and others are tapping into the power of generative AI technologies.\n\nAuthor: Richard Kerris\n\nAs the meteoric rise of ChatGPT demonstrates, generative AI can unlock enormous potential for companies, teams and individuals.\n\nWhether simplifying time-consuming tasks or accelerating 3D workflows to boost creativity and productivity, generative AI is already making an impact across industries \u2014 and there\u2019s much more to come.\n\nHow generative AI is paving the way for the future will be a key topic at NVIDIA GTC , a free, global conference for the era of AI and the metaverse, taking place online March 20-23.\n\nDozens of sessions will dive into topics around generative AI \u2014 from conversational text to the creation of virtual worlds from images. Here\u2019s a sampling:\n\nFireside Chat With NVIDIA founder and CEO Jensen Huang and OpenAI\u2019s Ilya Suskever : Join this conversation to learn more about the future of AI.\n\nHow Generative AI Is Transforming the Creative Process : In this fireside chat, Scott Belsky, chief product officer at Adobe, and Bryan Catanzaro, vice president of applied deep learning research at NVIDIA, will discuss the powerful impact and future direction of generative AI.\n\nGenerative AI Demystified : Discover how generative AI enables businesses to improve products and services. NVIDIA\u2019s Bryan Catanzaro will discuss major developments in generative AI and share popular use cases driving cutting-edge generative applications.\n\nGenerating Modern Masterpieces: MoMA Dreams Become a Reality : Hear from multimedia artist Refik Anadol, as well as Museum of Modern Art curators Michelle Kuo and Paola Antonelli, who\u2019ll discuss how AI helped transform the archive of data from New York\u2019s legendary modern art museum into a real-time art piece \u2014 the first of its kind in a major art museum.\n\nHow Generative AI Will Transform the Fashion Industry : See examples of how the latest generative tools are used in fashion, and hear from experts on their experiences in building a practice based on AI.\n\nEmerging Tech in Animation Pre-Production : Learn how Sony Pictures Animation is using generative AI to improve the creative pre-production and storytelling processes.\n\n3D by AI: How Generative AI Will Make Building Virtual Worlds Easier : See some of NVIDIA\u2019s latest work in generative AI models for creating 3D content and scenes, and explore how these tools and research can help 3D artists in their workflows.\n\nMany more sessions on generative AI are available to explore at GTC, and registration is free. Join to discover the latest AI technology innovations and breakthroughs.\n\nFeatured image courtesy of Refik Anadol.\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/02/28/generative-ai-gtc/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDIvMjgvZ2VuZXJhdGl2ZS1haS1ndGMv.pdf"}, {"question": "What is Neuralangelo's ability in 3D reconstruction?", "gt_answer": "Neuralangelo can generate lifelike virtual replicas of buildings, sculptures, and other real-world objects using neural networks.", "gt_context": "Digital Renaissance: NVIDIA Neuralangelo Research Reconstructs 3D Scenes\n\nAuthor: Isha Salian\n\nEditor\u2019s note: Neuralangelo is now available on GitHub .\n\nNeuralangelo, a new AI model by NVIDIA Research for 3D reconstruction using neural networks, turns 2D video clips into detailed 3D structures \u2014 generating lifelike virtual replicas of buildings, sculptures and other real-world objects.\n\nLike Michelangelo sculpting stunning, life-like visions from blocks of marble, Neuralangelo generates 3D structures with intricate details and textures. Creative professionals can then import these 3D objects into design applications, editing them further for use in art, video game development, robotics and industrial digital twins .\n\nNeuralangelo\u2019s ability to translate the textures of complex materials \u2014 including roof shingles, panes of glass and smooth marble \u2014 from 2D videos to 3D assets significantly surpasses prior methods. The high fidelity makes its 3D reconstructions easier for developers and creative professionals to rapidly create usable virtual objects for their projects using footage captured by smartphones.\n\n\u201cThe 3D reconstruction capabilities Neuralangelo offers will be a huge benefit to creators, helping them recreate the real world in the digital world,\u201d said Ming-Yu Liu, senior director of research and co-author on the paper. \u201cThis tool will eventually enable developers to import detailed objects \u2014 whether small statues or massive buildings \u2014 into virtual environments for video games or industrial digital twins.\u201d\n\nIn a demo, NVIDIA researchers showcased how the model could recreate objects as iconic as Michelangelo\u2019s David and as commonplace as a flatbed truck. Neuralangelo can also reconstruct building interiors and exteriors \u2014 demonstrated with a detailed 3D model of the park at NVIDIA\u2019s Bay Area campus.\n\nPrior AI models to reconstruct 3D scenes have struggled to accurately capture repetitive texture patterns, homogenous colors and strong color variations. Neuralangelo adopts instant neural graphics primitives, the technology behind NVIDIA Instant NeRF , to help capture these finer details.\n\nUsing a 2D video of an object or scene filmed from various angles, the model selects several frames that capture different viewpoints \u2014 like an artist considering a subject from multiple sides to get a sense of depth, size and shape.\n\nOnce it\u2019s determined the camera position of each frame, Neuralangelo\u2019s AI creates a rough 3D representation of the scene, like a sculptor starting to chisel the subject\u2019s shape.\n\nThe model then optimizes the render to sharpen the details, just as a sculptor painstakingly hews stone to mimic the texture of fabric or a human figure.\n\nThe final result is a 3D object or large-scale scene that can be used in virtual reality applications, digital twins or robotics development.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDYvMDEvbmV1cmFsYW5nZWxvLWFpLXJlc2VhcmNoLTNkLXJlY29uc3RydWN0aW9uLw==.pdf"}, {"question": "What is the purpose of the DiffCollage project?", "gt_answer": "The purpose of the DiffCollage project is to create large-scale content using a diffusion method.", "gt_context": "Neuralangelo is one of nearly 30 projects by NVIDIA Research to be presented at the Conference on Computer Vision and Pattern Recognition (CVPR), taking place June 18-22 in Vancouver. The papers span topics including pose estimation, 3D reconstruction and video generation.\n\nOne of these projects, DiffCollage , is a diffusion method that creates large-scale content \u2014 including long landscape orientation, 360-degree panorama and looped-motion images. When fed a training dataset of images with a standard aspect ratio, DiffCollage treats these smaller images as sections of a\n\nlarger visual \u2014 like pieces of a collage. This enables diffusion models to generate cohesive-looking large content without being trained on images of the same scale.\n\nThe technique can also transform text prompts into video sequences, demonstrated using a pretrained diffusion model that captures human motion: n n n n\n\nLearn more about NVIDIA Research at CVPR .\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/06/01/neuralangelo-ai-research-3d-reconstruction/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDYvMDEvbmV1cmFsYW5nZWxvLWFpLXJlc2VhcmNoLTNkLXJlY29uc3RydWN0aW9uLw==.pdf"}, {"question": "What technology does Orbital Sidekick use to detect gas leaks?", "gt_answer": "Orbital Sidekick uses hyperspectral intelligence technology to detect gas leaks.", "gt_context": "Braced From Space: Startup Keeps Watchful Eye on Gas Pipeline Leaks Across the Globe Processing hyperspectral imagery with NVIDIA edge AI technology, NVIDIA Inception member Orbital Sidekick has detected hundreds of suspected gas and hydrocarbon leaks.\n\nAuthor: Angie Lee\n\nAs its name suggests, Orbital Sidekick is creating technology that acts as a buddy in outer space, keeping an eye on the globe using satellites to help keep it safe and sustainable.\n\nThe San Francisco-based startup, a member of the NVIDIA Inception program, enables commercial and government users to optimize sustainable operations and security with hyperspectral intelligence \u2014 information collected from across the electromagnetic spectrum.\n\n\u201cSpace-based hyperspectral intelligence basically breaks up the spectrum of light so it\u2019s possible to see what\u2019s happening at a chemical level without needing an aircraft,\u201d said Kaushik Bangalore, vice president of payload engineering at Orbital Sidekick, or OSK.\n\nFounded in 2016, OSK is among the first to use hyperspectral intelligence to detect hydrocarbon or gas leaks. These are some of the world\u2019s most pressing energy issues \u2014 6,000 U.S. pipeline incidents from 2002-2021 resulted in over $11 billion in damages.\n\n\u201cPrevious industry-standard ways of detecting such issues were unreliable as they used small aircraft and pilots looking out the window for leaks, depending on the trained eye rather than sensors or other technologies,\u201d said Bangalore.\n\nOSK operates a constellation of satellites that collect hyperspectral imagery from space. That data is processed and analyzed in real time using the NVIDIA Jetson edge AI platform . Then, insights \u2014 like the type of leak at a GPS point, its size and its urgency \u2014 can be viewed on a screen by users of OSK\u2019s SIGMA Monitor platform.\n\nThe technology accomplishes what a pilot would, but much more quickly, objectively and with higher accuracy, Bangalore said.\n\nOSK technologies have so far monitored more than 20,000 kilometers of pipelines for various customers, according to Tushar Prabhakar, its founder and chief operating officer.\n\nThe platform has detected nearly 100 suspected methane leaks, 200 suspected liquid hydrocarbon leaks or contamination issues, and more than 300 intrusive events related to digging or construction activities, Prabhakar added. OSK helped eliminate the potential for these events to become serious energy crises.\n\n\u201cWe\u2019re taking hyperspectral intelligence to the finest commercial resolution that the world has ever seen to make the Earth a more sustainable place,\u201d Bangalore said. \u201cThe biggest challenge with hyperspectral imagery is dealing with huge amounts of data, which can be up to 400x the size of 2D visual data. NVIDIA technology helps process this data in real time.\u201d\n\nOSK uses the NVIDIA Jetson AGX Xavier module as an AI engine at the satellites\u2019 edge to process the hyperspectral data collected from various sensors and crunch algorithms for leak detection.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDEvMjUvb3JiaXRhbC1zaWRla2ljay8=.pdf"}, {"question": "What is the projected size of the EV battery market by 2027?", "gt_answer": "The EV battery market is projected to reach over $218 billion in 2027.", "gt_context": "The module, along with the NVIDIA CV-CUDA and CUDA Python software toolkits, have sped up OSK\u2019s analysis by 5x, according to Bangalore. This acceleration enhances the platform\u2019s ability to detect and recognize anomalies from space \u2014 then project the data back to Earth.\n\n\u201cThere are around 15 sun-synchronous orbits per day,\u201d Bangalore said. \u201cWith NVIDIA Jetson AGX Xavier, we can process all the data taken onboard a satellite in an orbit within that same orbit, enabling continuous data capture.\u201d\n\nIn 2018, OSK\u2019s previous-generation system was launched on the International Space Station. Its data was analyzed using the NVIDIA Jetson TX2 module.\n\nIn addition, OSK uses the next-generation NVIDIA Jetson AGX Orin module for an aerial version of the platform that collects hyperspectral imagery from airplanes. Compared to the previous-generation module, the Jetson AGX Orin \u2014 with upgraded memory and speed \u2014 can run larger amounts of map data streamed in real time to pilots, Bangalore said.\n\n\u201cWe chose the NVIDIA Jetson platform because it offers off-the-shelf products for industrial applications with extended shock, vibration and temperature, and software that has been optimized for the NVIDIA GPU architecture,\u201d Bangalore said.\n\nAnd as a member of NVIDIA Inception, a free, global program for cutting-edge startups, OSK received technical support to optimize the team\u2019s use of such safety features and SDK acceleration.\n\nHyperspectral intelligence offers a multitude of applications. For this reason, the OSK platform is deployed across a broad range of customers, including the U.S. Department of Defense and energy sector.\n\nEnergy Transfer, a major pipeline operator, will use OSK\u2019s GHOSt constellation for asset monitoring.\n\nFor the commercial oil and gas industry, OSK technology helps detect gas and hydrocarbon leaks, allowing pipeline operators to quickly halt work and fix issues.\n\nTo accelerate the energy transition, the platform can enhance exploration of lithium, cobalt and more, display a hyperspectral index of areas on a map that have signals of the elements, and differentiate between these materials and soil.\n\nCreating sustainable supply chains for battery materials like lithium is key to advancing the global energy transition and scaling electric vehicle adoption, as lithium-ion batteries power the majority of EVs. The EV battery market is projected to reach over $218 billion in 2027 , and EV sales are estimated to reach up to 50 million units by 2030 .\n\n\u201cOur tech can help discover lithium, and prevent methane or greenhouse gasses from being let out into the atmosphere,\u201d Bangalore said. \u201cIt\u2019s a very direct impact, and it\u2019s what the planet needs.\u201d\n\nRead more about innovative energy startups, including MinervaCQ , which is using speech AI to coach contact-center agents in retail energy, and Skycatch , which is building digital twins to make mining and construction sites safer, more efficient and sustainable.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDEvMjUvb3JiaXRhbC1zaWRla2ljay8=.pdf"}, {"question": "What can I learn about NVIDIA from the given article?", "gt_answer": "You can learn more about NVIDIA\u2019s work in energy and how to apply to join NVIDIA Inception.", "gt_context": "Learn more about NVIDIA\u2019s work in energy and apply to join NVIDIA Inception .\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/01/25/orbital-sidekick/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDEvMjUvb3JiaXRhbC1zaWRla2ljay8=.pdf"}, {"question": "What is MONAI Deploy?", "gt_answer": "MONAI Deploy is a way of packaging an AI model that makes it easy to deploy in an existing healthcare ecosystem.", "gt_context": "MAP Once, Run Anywhere: MONAI Introduces Framework for Deploying Medical Imaging AI Apps\n\nMedical-imaging leaders, including UCSF, Cincinnati Children\u2019s Hospital and startup Qure.ai, are adopting MONAI Deploy to turn research breakthroughs into clinical impact.\n\nAuthor: David Niewolny\n\nDelivering AI-accelerated healthcare at scale will take thousands of neural networks working together to cover the breadth of human physiology, diseases and even hospital operations \u2014 a significant challenge in today\u2019s smart hospital environment.\n\nMONAI , an open-source medical-imaging AI framework with more than 650,000 downloads, accelerated by NVIDIA, is making it easier to integrate these models into clinical workflows with MONAI Application Packages, or MAPs.\n\nDelivered through MONAI Deploy , a MAP is a way of packaging an AI model that makes it easy to deploy in an existing healthcare ecosystem .\n\n\u201cIf someone wanted to deploy several AI models in an imaging department to help experts identify a dozen different conditions, or partially automate the creation of medical imaging reports, it would take an untenable amount of time and resources to get the right hardware and software infrastructure for each one,\u201d said Dr. Ryan Moore at Cincinnati Children\u2019s Hospital. \u201cIt used to be possible, but not feasible.\u201d\n\nMAPs simplify the process. When a developer packages an app using the MONAI Deploy Application software development kit, hospitals can easily run it on premises or in the cloud. The MAPs specification also integrates with healthcare IT standards such as DICOM for medical imaging interoperability.\n\n\u201cUntil now, most AI models would remain in an R&D; loop, rarely reaching patient care,\u201d said Jorge Cardoso, chief technology officer at the London Medical Imaging & AI Centre for Value-Based Healthcare. \u201cMONAI Deploy will help break that loop, making impactful clinical AI a more frequent reality.\u201d\n\nHealthcare institutions, academic medical centers and AI software developers around the world worldwide are adopting MONAI Deploy, including:\n\nCincinnati Children\u2019s Hospital : The academic medical center is creating a MAP for an AI model that automates total cardiac volume segmentation from CT images, aiding pediatric heart transplant patients in a project funded by the National Institutes of Health .\n\nNational Health Service in England : The NHS Trusts have deployed its MONAI-based AI Deployment Engine platform, known as AIDE, across four hospitals to provide AI-enabled disease-detection tools to healthcare professionals serving 5 million patients a year.\n\nQure.ai : A member of the NVIDIA Inception program for startups, Qure.ai develops medical imaging AI models for use cases including lung cancer, traumatic brain injuries and tuberculosis. The company is using MAPs to package its solutions for deployment, accelerating its time to clinical impact.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTEvMjgvbW9uYWktZGVwbG95LWZyYW1ld29yay1tZWRpY2FsLWltYWdpbmctYWktYXBwcy8=.pdf"}, {"question": "Which healthcare imaging data platform has integrated MONAI Deploy to enable clinicians to deploy AI-assisted annotation tools?", "gt_answer": "Google Cloud's Medical Imaging Suite", "gt_context": "SimBioSys : The Chicago-based Inception startup builds 3D virtual representations of patients\u2019 tumors and is using MAPs for precision medicine AI applications that can help predict how a patient will respond to a specific treatment.\n\nUniversity of California, San Francisco: UCSF is developing MAPs for several AI models, with applications including hip fracture detection, liver and brain tumor segmentation, and knee and breast\n\ncancer classification.\n\nThe MAP specification was developed by the MONAI Deploy working group, a team of experts from more than a dozen medical imaging institutions, to benefit AI app developers as well as the clinical and infrastructure platforms that run AI apps.\n\nFor developers, MAPs can help accelerate AI model evolution by helping researchers easily package and test their models in a clinical environment. This allows them to collect real-world feedback that helps improve the AI.\n\nFor cloud service providers, supporting MAPs \u2014 which were designed using cloud-native technologies \u2014 enables researchers and companies using MONAI Deploy to run AI applications on their platform, either by using containers or with native app integration. Cloud platforms integrating MONAI Deploy and MAPs include:\n\nAmazon HealthLake Imaging : The MAP connector has been integrated with the HealthLake Imaging service, allowing clinicians to view, process and segment medical images in real time.\n\nGoogle Cloud : Google Cloud\u2019s Medical Imaging Suite , designed to make healthcare imaging data more accessible, interoperable and useful, has integrated MONAI into its platform to enable clinicians to deploy AI-assisted annotation tools that help automate the highly manual and repetitive task of labeling medical images.\n\nNuance Precision Imaging Network, powered by Microsoft Azure : Nuance and NVIDIA recently announced a partnership bringing together MONAI and the Nuance Precision Imaging Network, a cloud platform that provides more than 12,000 healthcare facilities with access to AI-powered tools and insights.\n\nOracle Cloud Infrastructure : Oracle and NVIDIA recently announced a collaboration to bring accelerated compute solutions for healthcare, including MONAI Deploy, to Oracle Cloud Infrastructure. Developers can start building MAPs with MONAI Deploy today using NVIDIA containers on the Oracle Cloud Marketplace.\n\nGet started with MONAI and discover how NVIDIA is helping build AI-powered medical imaging ecosystems at this week\u2019s RSNA conference .\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/11/28/monai-deploy-framework-medical-imaging-ai-apps/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTEvMjgvbW9uYWktZGVwbG95LWZyYW1ld29yay1tZWRpY2FsLWltYWdpbmctYWktYXBwcy8=.pdf"}, {"question": "What tool does Industrial Light & Magic (ILM) use to search through their asset library?", "gt_answer": "ILM uses Omniverse DeepSearch to search through their asset library.", "gt_context": "As Far as the AI Can See: ILM Uses Omniverse DeepSearch to Create the Perfect Sky Omniverse AI-enabled search tool lets legendary studio sift through massive database of 3D scenes.\n\nAuthor: Richard Kerris\n\nFor cutting-edge visual effects and virtual production, creative teams and studios benefit from digital sets and environments that can be updated in real time.\n\nA crucial element in any virtual production environment is a sky dome, often used to provide realistic lighting for virtual environments and in-camera visual effects. Legendary studio Industrial Light & Magic (ILM) is tapping into the power of AI to take its skies to new heights with NVIDIA AI-enabled DeepSearch and Omniverse Enterprise .\n\nCapturing photorealistic details of a sky can be tricky. At SIGGRAPH today, ILM showcased how its team, with the NVIDIA DeepSearch tool, used natural language to rapidly search through a massive asset library and create a captivating sky dome.\n\nThe video shows how Omniverse Enterprise can provide filmmakers with the ultimate flexibility to develop the ideal look and lighting to further their stories. This helps artists save time, enhance productivity and accelerate creativity for virtual production.\n\nAfter narrowing down their search results, the ILM team auditions the remaining sky domes in virtual reality to assess whether the asset will be a perfect match for the shot. By using VR, ILM can approximate what the skies will look like on a virtual production set.\n\nAn extensive library with thousands of references and 3D assets offers advantages, but it also presents some challenges without an efficient way to search through all the data.\n\nTypically, users set up folders or tag items with keywords, which can be incredibly time consuming. This is especially true for a studio like ILM, which has over 40 years\u2019 worth of material in its reference library, including photography, matte paintings, backdrops and other materials that have been captured over the decades.\n\nWith hundreds of thousands of untagged pieces of content, it\u2019s impractical for the ILM team to manually search through them on a production schedule.\n\nOmniverse DeepSearch, however, lets ILM search intuitively through untagged assets using text or a 2D image. DeepSearch uses AI to categorize and find images automatically \u2014 this results in massive time savings for the creative team, removing the need to manually tag each asset.\n\n\u201cWith Omniverse DeepSearch, we have the ability to search through data in real time, which is key for production,\u201d said Landis Fields, real time principal creative at ILM. \u201cAnd being able to search through assets with natural language allows for our creative teams to easily find what they\u2019re looking for, helping them achieve the final look and feel of a scene much more efficiently than before.\u201d", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMDgvMDkvaWxtLW9tbml2ZXJzZS1kZWVwc2VhcmNoLw==.pdf"}, {"question": "What can the ILM team do with DeepSearch and Omniverse Enterprise?", "gt_answer": "The ILM team can review search results, bring images into the 3D space, and interact with the 3D environment using a VR headset.", "gt_context": "DeepSearch also works on USD files, so the ILM team can review search results and bring images into the 3D space in Omniverse Enterprise. The artists could then interact with the 3D environment using a VR headset.\n\nWith NVIDIA DeepSearch and Omniverse Enterprise, ILM has the potential to accelerate creative pipelines, lower costs and enhance production workflows to create captivating content for virtual productions.\n\nJoin NVIDIA at SIGGRAPH to learn more about the latest Omniverse announcements, watch the company\u2019s special address on demand and see the global premiere of NVIDIA\u2019s documentary, The Art of Collaboration: NVIDIA, Omniverse, and GTC , on Wednesday, Aug. 10, at 10 a.m. PT.\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/08/09/ilm-omniverse-deepsearch/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMDgvMDkvaWxtLW9tbml2ZXJzZS1kZWVwc2VhcmNoLw==.pdf"}, {"question": "What is PennyLane?", "gt_answer": "PennyLane is a quantum programming framework from Xanadu that adapts deep learning techniques and tools to program quantum computers.", "gt_context": "A Quantum Boost: cuQuantum With PennyLane Lets Simulations Ride Supercomputers Scientists are accelerating quantum simulations for the first time at supercomputing scale, thanks to NVIDIA cuQuantum with Xanadu\u2019s PennyLane.\n\nAuthor: Sam Stanwyck\n\nTen miles in from Long Island\u2019s Atlantic coast, Shinjae Yoo is revving his engine.\n\nThe computational scientist and machine learning group lead at the U.S. Department of Energy\u2019s Brookhaven National Laboratory is one of many researchers gearing up to run quantum computing simulations on a supercomputer for the first time, thanks to new software.\n\nYoo\u2019s engine, the Perlmutter supercomputer at the National Energy Research Scientific Computing Center (NERSC), is using the latest version of PennyLane , a quantum programming framework from Toronto-based Xanadu. The open-source software, which builds on the NVIDIA cuQuantum software development kit , lets simulations run on high-performance clusters of NVIDIA GPUs.\n\nThe performance is key because researchers like Yoo need to process ocean-size datasets. He\u2019ll run his programs across as many as 256 NVIDIA A100 Tensor Core GPUs on Perlmutter to simulate about three dozen qubits \u2014 the powerful calculators quantum computers use.\n\nThat\u2019s about twice the number of qubits most researchers can model these days.\n\nThe so-called multi-node version of PennyLane, used in tandem with the NVIDIA cuQuantum SDK, simplifies the complex job of accelerating massive simulations of quantum systems.\n\n\u201cThis opens the door to letting even my interns run some of the largest simulations \u2014 that\u2019s why I\u2019m so excited,\u201d said Yoo, whose team has six projects using PennyLane in the pipeline.\n\nHis work aims to advance high-energy physics and machine learning. Other researchers use quantum simulations to take chemistry and materials science to new levels.\n\nQuantum computing is alive in corporate R&D; centers, too.\n\nFor example, Xanadu is helping companies like Rolls-Royce develop quantum algorithms to design state-of-the-art jet engines for sustainable aviation and Volkswagen Group invent more powerful batteries for electric cars.\n\nFour More Projects on Perlmutter\n\nMeanwhile, at NERSC, at least four other projects are in the works this year using multi-node Pennylane, according to Katherine Klymko, who leads the quantum computing program there. They include efforts from NASA Ames and the University of Alabama.\n\n\u201cResearchers in my field of chemistry want to study molecular complexes too large for classical computers to handle,\u201d she said. \u201cTools like Pennylane let them extend what they can currently do classically to prepare for eventually running algorithms on large-scale quantum computers.\u201d\n\nPennyLane is the product of a novel idea. It adapts popular deep learning techniques like backpropagation and tools like PyTorch to programming quantum computers.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDkvMTIvcXVhbnR1bS1zdXBlcmNvbXB1dGVycy1wZW5ueWxhbmUv.pdf"}, {"question": "What is the purpose of Xanadu's PennyLane software?", "gt_answer": "The purpose of Xanadu's PennyLane software is to design code that can run across many types of quantum computers.", "gt_context": "Xanadu designed the code to run across as many types of quantum computers as possible, so the software got traction in the quantum community soon after its introduction in a 2018 paper .\n\n\u201cThere was engagement with our content, making cutting-edge research accessible, and people got excited,\u201d recalled Josh Izaac, director of product at Xanadu and a quantum physicist who was an author\n\nof the paper and a developer of PennyLane.\n\nA common comment on the PennyLane forum these days is, \u201cI want more qubits,\u201d said Lee J. O\u2019Riordan, a senior quantum software developer at Xanadu, responsible for PennyLane\u2019s performance.\n\n\u201cWhen we started work in 2022 with cuQuantum on a single GPU, we got 10x speedups pretty much across the board \u2026 we hope to scale by the end of the year to 1,000 nodes \u2014 that\u2019s 4,000 GPUs \u2014 and that could mean simulating more than 40 qubits,\u201d O\u2019Riordan said.\n\nScientists are still formulating the questions they\u2019ll address with that performance \u2014 the kind of problem they like to have.\n\nCompanies designing quantum computers will use the boost to test ideas for building better systems. Their work feeds a virtuous circle, enabling new software features in PennyLane that, in turn, enable more system performance.\n\nO\u2019Riordan saw early on that GPUs were the best vehicle for scaling PennyLane\u2019s performance. He co-authored last year a paper on a method for splitting a quantum program across more than 100 GPUs to simulate more than 60 qubits, split into many 30 qubit sub-circuits.\n\n\u201cWe wanted to extend our work to even larger workloads, so when we heard NVIDIA was adding multi-node capability to cuQuantum, we wanted to support it as soon as possible,\u201d he said.\n\nWithin four months, multi-node PennyLane was born.\n\n\u201cFor a big, distributed GPU project, that was a great turnaround time. Everyone working on cuQuantum helped make the integration as easy as possible,\u201d O\u2019Riordan said.\n\nA Xanadu blog details how developers can simulate large-scale systems with more than 30 qubits using PennyLane and cuQuantum.\n\nThe team is still collecting data, but so far on \u201csample-based workloads, we see almost linear scaling,\u201d he said.\n\nOr, as NVIDIA founder and CEO Jensen Huang might say, \u201c The more you buy, the more you save .\u201d\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/09/12/quantum-supercomputers-pennylane/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDkvMTIvcXVhbnR1bS1zdXBlcmNvbXB1dGVycy1wZW5ueWxhbmUv.pdf"}, {"question": "How are researchers using generative AI models in drug discovery?", "gt_answer": "Researchers are using generative AI models to read a protein's amino acid sequence and accurately predict the structure of target proteins in seconds, rather than weeks or months.", "gt_context": "AI-Fueled Productivity: Generative AI Opens New Era of Efficiency Across Industries\n\nAuthor: Cliff Edwards\n\nA watershed moment on Nov. 22, 2022, was mostly virtual, yet it shook the foundations of nearly every industry on the planet.\n\nOn that day, OpenAI released ChatGPT, the most advanced artificial intelligence chatbot ever developed. This set off demand for generative AI applications that help businesses become more efficient, from providing consumers with answers to their questions to accelerating the work of researchers as they seek scientific breakthroughs, and much, much more.\n\nBusinesses that previously dabbled in AI are now rushing to adopt and deploy the latest applications. Generative AI \u2014 the ability of algorithms to create new text, images, sounds, animations, 3D models and even computer code \u2014 is moving at warp speed, transforming the way people work and play.\n\nBy employing large language models (LLMs) to handle queries, the technology can dramatically reduce the time people devote to manual tasks like searching for and compiling information.\n\nThe stakes are high. AI could contribute more than $15 trillion to the global economy by 2030, according to PwC . And the impact of AI adoption could be greater than the inventions of the internet, mobile broadband and the smartphone \u2014 combined.\n\nThe engine driving generative AI is accelerated computing . It uses GPUs, DPUs and networking along with CPUs to accelerate applications across science, analytics, engineering, as well as consumer and enterprise use cases.\n\nEarly adopters across industries \u2014 from drug discovery , financial services , retail and telecommunications to energy , higher education and the public sector \u2014 are combining accelerated computing with generative AI to transform business operations, service offerings and productivity.\n\nToday, radiologists use AI to detect abnormalities in medical images, doctors use it to scan electronic health records to uncover patient insights, and researchers use it to accelerate the discovery of novel drugs.\n\nTraditional drug discovery is a resource-intensive process that can require the synthesis of over 5,000 chemical compounds and yields an average success rate of just 10%. And it takes more than a decade for most new drug candidates to reach the market.\n\nResearchers are now using generative AI models to read a protein\u2019s amino acid sequence and accurately predict the structure of target proteins in seconds, rather than weeks or months.\n\nUsing NVIDIA BioNeMo models, Amgen, a global leader in biotechnology , has slashed the time it takes to customize models for molecule screening and optimization from three months to just a few weeks. This type of trainable foundation model enables scientists to create variants for research into specific diseases, allowing them to develop target treatments for rare conditions.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDcvMTMvZ2VuZXJhdGl2ZS1haS1mb3ItaW5kdXN0cmllcy8=.pdf"}, {"question": "What are some ways retailers are using AI?", "gt_answer": "Retailers are using AI to improve customer experiences, power dynamic pricing, create customer segmentation, design personalized recommendations, and perform visual search.", "gt_context": "Whether predicting protein structures or securely training algorithms on large real-world and synthetic datasets, generative AI and accelerated computing are opening new areas of research that can help mitigate the spread of disease, enable personalized medical treatments and boost patient survival rates.\n\nAccording to a recent NVIDIA survey , the top AI use cases in the financial services industry are customer services and deep analytics, where natural language processing and LLMs are used to better\n\nrespond to customer inquiries and uncover investment insights. Another common application is in recommender systems that power personalized banking experiences, marketing optimization and investment guidance.\n\nAdvanced AI applications have the potential to help the industry better prevent fraud and transform every aspect of banking, from portfolio planning and risk management to compliance and automation.\n\nEighty percent of business-relevant information is in an unstructured format \u2014 primarily text \u2014 which makes it a prime candidate for generative AI. Bloomberg News produces 5,000 stories a day related to the financial and investment community. These stories represent a vast trove of unstructured market data that can be used to make timely investment decisions.\n\nNVIDIA, Deutsche Bank , Bloomberg and others are creating LLMs trained on domain-specific and proprietary data to power finance applications.\n\nFinancial Transformers , or \u201cFinFormers,\u201d can learn context and understand the meaning of unstructured financial data. They can power Q&A; chatbots, summarize and translate financial texts, provide early warning signs of counterparty risk, quickly retrieve data and identify data-quality issues.\n\nThese generative AI tools rely on frameworks that can integrate proprietary data into model training and fine-tuning, integrate data curation to prevent bias and use guardrails to keep conversations finance-specific.\n\nExpect fintech startups and large international banks to expand their use of LLMs and generative AI to develop sophisticated virtual assistants to serve internal and external stakeholders, create hyper-personalized customer content, automate document summarization to reduce manual work, and analyze terabytes of public and private data to generate investment insights.\n\nWith 60% of all shopping journeys starting online and consumers more connected and knowledgeable than ever, AI has become a vital tool to help retailers match shifting expectations and differentiate from a rising tide of competition.\n\nRetailers are using AI to improve customer experiences, power dynamic pricing, create customer segmentation, design personalized recommendations and perform visual search.\n\nGenerative AI can support customers and employees at every step through the buyer journey.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDcvMTMvZ2VuZXJhdGl2ZS1haS1mb3ItaW5kdXN0cmllcy8=.pdf"}, {"question": "How can generative AI support telecommunications providers?", "gt_answer": "Generative AI can support telecommunications providers by optimizing network performance, improving customer support, detecting security intrusions, and enhancing maintenance operations.", "gt_context": "Generative AI can support customers and employees at every step through the buyer journey.\n\nWith AI models trained on specific brand and product data, they can generate robust product descriptions that improve search engine optimization rankings and help shoppers find the exact product they\u2019re looking for. For example, generative AI can use metatags containing product attributes to generate more comprehensive product descriptions that include various terms like \u201clow sugar\u201d or \u201cgluten free.\u201d\n\nAI virtual assistants can check enterprise resource planning systems and generate customer service messages to inform shoppers about which items are available and when orders will ship, and even assist customers with order change requests.\n\nFashable , a member of NVIDIA Inception \u2019s global network of technology startups, is using generative AI to create virtual clothing designs, eliminating the need for physical fabric during product development. With the models trained on both proprietary and market data, this reduces the environmental impact of fashion design and helps retailers design clothes according to current market trends and tastes.\n\nExpect retailers to use AI to capture and retain customer attention, deliver superior shopping experiences, and drive revenue by matching shoppers with the right products at the right time.\n\nIn an NVIDIA survey covering the telecommunications industry , 95% of respondents reported that they were engaged with AI, while two-thirds believed that AI would be important to their company\u2019s future success.\n\nWhether improving customer service, streamlining network operations and design, supporting field technicians or creating new monetization opportunities, generative AI has the potential to reinvent the telecom industry.\n\nTelcos can train diagnostic AI models with proprietary data on network equipment and services, performance, ticket issues, site surveys and more. These models can accelerate troubleshooting of technical performance issues, recommend network designs, check network configurations for compliance, predict equipment failures, and identify and respond to security threats.\n\nGenerative AI applications on handheld devices can support field technicians by scanning equipment and generating virtual tutorials to guide them through repairs. Virtual guides can then be enhanced with augmented reality, enabling technicians to analyze equipment in a 3D immersive environment or call on a remote expert for support.\n\nNew revenue opportunities will also open for telcos. With large edge infrastructure and access to vast datasets, telcos around the world are now offering generative AI as a service to enterprise and government customers.\n\nAs generative AI advances, expect telecommunications providers to use the technology to optimize network performance, improve customer support, detect security intrusions and enhance maintenance operations.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDcvMTMvZ2VuZXJhdGl2ZS1haS1mb3ItaW5kdXN0cmllcy8=.pdf"}, {"question": "How has AI been employed in education?", "gt_answer": "From intelligent tutoring systems to automated essay grading, AI has been employed in education for decades.", "gt_context": "In the energy industry , AI is powering predictive maintenance and asset optimization, smart grid management, renewable energy forecasting, grid security and more.\n\nTo meet growing data needs across aging infrastructure and new government compliance regulations, energy operators are looking to generative AI.\n\nIn the U.S., electric utility companies spend billions of dollars every year to inspect, maintain and upgrade power generation and transmission infrastructure.\n\nUntil recently, using vision AI to support inspection required algorithms to be trained on thousands of manually collected and tagged photos of grid assets, with training data constantly updated for new components. Now, generative AI can do the heavy lifting.\n\nWith a small set of image training data, algorithms can generate thousands of physically accurate images to train computer vision models that help field technicians identify grid equipment corrosion, breakage, obstructions and even detect wildfires . This type of proactive maintenance enhances grid reliability and resiliency by reducing downtime, while diminishing the need to dispatch teams to the field.\n\nGenerative AI can also reduce the need for manual research and analysis. According to McKinsey, employees spend up to 1.8 hours per day searching for information \u2014 nearly 20% of the work week. To increase productivity, energy companies can train LLMs on proprietary data, including meeting notes, SAP records, emails, field best practices and public data such as standard material data sheets.\n\nWith this type of knowledge repository connected to an AI chatbot, engineers and data scientists can get instant answers to highly technical questions. For example, a maintenance engineer troubleshooting pitch control issues on a turbine\u2019s hydraulic system could ask a bot: \u201cHow should I adjust the hydraulic pressure or flow to rectify pitch control issues on a model turbine from company X?\u201d A properly trained model would deliver specific instructions to the user, who wouldn\u2019t have to look through a bulky manual to find answers.\n\nWith AI applications for new system design, customer service and automation, expect generative AI to enhance safety and energy efficiency, as well as reduce operational expenses in the energy industry.\n\nFrom intelligent tutoring systems to automated essay grading, AI has been employed in education for decades. As universities use AI to improve teacher and student experiences, they\u2019re increasingly dedicating resources to build AI-focused research initiatives.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDcvMTMvZ2VuZXJhdGl2ZS1haS1mb3ItaW5kdXN0cmllcy8=.pdf"}, {"question": "How can generative AI be used in the public sector?", "gt_answer": "Generative AI can be used in the public sector to boost productivity by summarizing documents, providing relevant information through virtual assistants and chatbots, and assisting in content generation for publications, correspondence, reports, and announcements.", "gt_context": "For example, researchers at the University of Florida have access to one of the world\u2019s fastest supercomputers in academia. They\u2019ve used it to develop GatorTron \u2014 a natural language processing model that enables computers to read and interpret medical language in clinical notes that are stored in electronic health records. With a model that understands medical context, AI developers can create numerous medical applications, such as speech-to-text apps that support doctors with automated medical charting.\n\nIn Europe, an industry-university collaboration involving the Technical University of Munich is demonstrating that LLMs trained on genomics data can generalize across a plethora of genomic tasks, unlike previous approaches that required specialized models. The genomics LLM is expected to help scientists understand the dynamics of how DNA is translated into RNA and proteins, unlocking new clinical applications that will benefit drug discovery and health.\n\nTo conduct this type of groundbreaking research and attract the most motivated students and qualified academic professionals, higher education institutes should consider a whole-university approach to pool budget, plan AI initiatives, and distribute AI resources and benefits across disciplines.\n\nToday, the biggest opportunity for AI in the public sector is helping public servants to perform their jobs more efficiently and save resources.\n\nThe U.S. federal government employs over 2 million civilian employees \u2014 two-thirds of whom work in professional and administrative jobs.\n\nThese administrative roles often involve time-consuming manual tasks, including drafting, editing and summarizing documents, updating databases, recording expenditures for auditing and compliance, and responding to citizen inquiries.\n\nTo control costs and bring greater efficiency to routine job functions, government agencies can use generative AI.\n\nGenerative AI\u2019s ability to summarize documents has great potential to boost the productivity of policymakers and staffers, civil servants, procurement officers and contractors. Consider a 756-page report recently released by the National Security Commission on Artificial Intelligence. With reports and legislation often spanning hundreds of pages of dense academic or legal text, AI-powered summaries generated in seconds can quickly break down complex content into plain language, saving the human resources otherwise needed to complete the task.\n\nAI virtual assistants and chatbots powered by LLMs can instantly deliver relevant information to people online, taking the burden off of overstretched staff who work phone banks at agencies like the Treasury Department, IRS and DMV.\n\nWith simple text inputs, AI content generation can help public servants create and distribute publications, email correspondence, reports, press releases and public service announcements.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDcvMTMvZ2VuZXJhdGl2ZS1haS1mb3ItaW5kdXN0cmllcy8=.pdf"}, {"question": "What are some government organizations that can benefit from the analytical capabilities of AI?", "gt_answer": "Medicare, Medicaid, Veterans Affairs, USPS, and the State Department.", "gt_context": "The analytical capabilities of AI can also help process documents to speed the delivery of vital services provided by organizations like Medicare, Medicaid, Veterans Affairs, USPS and the State Department.\n\nGenerative AI could be a pivotal tool to help government bodies work within budget constraints, deliver government services more quickly and achieve positive public sentiment.\n\nAcross every field, organizations are transforming employee productivity, improving products and delivering higher-quality services with generative AI.\n\nTo put generative AI into practice, businesses need expansive amounts of data, deep AI expertise and sufficient compute power to deploy and maintain models quickly. Enterprises can fast-track adoption with the NeMo generative AI framework, part of NVIDIA AI Enterprise software, running on DGX Cloud . NVIDIA\u2019s pretrained foundation models offer a simplified approach to building and running customized generative AI solutions for unique business use cases.\n\nLearn more about powerful generative AI tools to help your business increase productivity, automate tasks, and unlock new opportunities for employees and customers.\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/07/13/generative-ai-for-industries/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDcvMTMvZ2VuZXJhdGl2ZS1haS1mb3ItaW5kdXN0cmllcy8=.pdf"}, {"question": "What is the name of the software developed by NVIDIA for optimizing inference performance?", "gt_answer": "The software developed by NVIDIA for optimizing inference performance is called TensorRT-LLM.", "gt_context": "NVIDIA Grace Hopper Superchip Sweeps MLPerf Inference Benchmarks NVIDIA GH200, H100 and L4 GPUs and Jetson Orin modules show exceptional performance running AI in production from the cloud to the network\u2019s edge.\n\nAuthor: Dave Salvator\n\nIn its debut on the MLPerf industry benchmarks, the NVIDIA GH200 Grace Hopper Superchip ran all data center inference tests, extending the leading performance of NVIDIA H100 Tensor Core GPUs .\n\nThe overall results showed the exceptional performance and versatility of the NVIDIA AI platform from the cloud to the network\u2019s edge.\n\nSeparately, NVIDIA announced inference software that will give users leaps in performance, energy efficiency and total cost of ownership.\n\nThe GH200 links a Hopper GPU with a Grace CPU in one superchip. The combination provides more memory, bandwidth and the ability to automatically shift power between the CPU and GPU to optimize performance.\n\nSeparately, NVIDIA HGX H100 systems that pack eight H100 GPUs delivered the highest throughput on every MLPerf Inference test in this round.\n\nGrace Hopper Superchips and H100 GPUs led across all MLPerf\u2019s data center tests, including inference for computer vision, speech recognition and medical imaging, in addition to the more demanding use cases of recommendation systems and the large language models ( LLMs ) used in generative AI .\n\nOverall, the results continue NVIDIA\u2019s record of demonstrating performance leadership in AI training and inference in every round since the launch of the MLPerf benchmarks in 2018.\n\nThe latest MLPerf round included an updated test of recommendation systems, as well as the first inference benchmark on GPT-J, an LLM with six billion parameters, a rough measure of an AI model\u2019s size.\n\nTo cut through complex workloads of every size, NVIDIA developed TensorRT-LLM , generative AI software that optimizes inference. The open-source library \u2014 which was not ready in time for August submission to MLPerf \u2014 enables customers to more than double the inference performance of their already purchased H100 GPUs at no added cost.\n\nNVIDIA\u2019s internal tests show that using TensorRT-LLM on H100 GPUs provides up to an 8x performance speedup compared to prior generation GPUs running GPT-J 6B without the software.\n\nThe software got its start in NVIDIA\u2019s work accelerating and optimizing LLM inference with leading companies including Meta, AnyScale, Cohere, Deci, Grammarly, Mistral AI, MosaicML (now part of Databricks), OctoML, Tabnine and Together AI.\n\nMosaicML added features that it needs on top of TensorRT-LLM and integrated them into its existing serving stack. \u201cIt\u2019s been an absolute breeze,\u201d said Naveen Rao, vice president of engineering at Databricks.\n\n\u201cTensorRT-LLM is easy-to-use, feature-packed and efficient,\u201d Rao said. \u201cIt delivers state-of-the-art performance for LLM serving using NVIDIA GPUs and allows us to pass on the cost savings to our customers.\u201d", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDkvMTEvZ3JhY2UtaG9wcGVyLWluZmVyZW5jZS1tbHBlcmYv.pdf"}, {"question": "What is the MLPerf benchmark?", "gt_answer": "The MLPerf benchmark is a transparent and objective benchmark used by users to make informed buying decisions. It covers a wide range of use cases and scenarios, providing dependable and flexible performance deployment.", "gt_context": "TensorRT-LLM is the latest example of continuous innovation on NVIDIA\u2019s full-stack AI platform. These ongoing software advances give users performance that grows over time at no extra cost and is versatile across diverse AI workloads.\n\nIn the latest MLPerf benchmarks, NVIDIA L4 GPUs ran the full range of workloads and delivered great performance across the board.\n\nFor example, L4 GPUs running in compact, 72W PCIe accelerators delivered up to 6x more performance than CPUs rated for nearly 5x higher power consumption.\n\nIn addition, L4 GPUs feature dedicated media engines that, in combination with CUDA software, provide up to 120x speedups for computer vision in NVIDIA\u2019s tests.\n\nL4 GPUs are available from Google Cloud and many system builders, serving customers in industries from consumer internet services to drug discovery.\n\nSeparately, NVIDIA applied a new model compression technology to demonstrate up to a 4.7x performance boost running the BERT LLM on an L4 GPU. The result was in MLPerf\u2019s so-called \u201copen division,\u201d a category for showcasing new capabilities.\n\nThe technique is expected to find use across all AI workloads. It can be especially valuable when running models on edge devices constrained by size and power consumption.\n\nIn another example of leadership in edge computing, the NVIDIA Jetson Orin system-on-module showed performance increases of up to 84% compared to the prior round in object detection, a computer vision use case common in edge AI and robotics scenarios.\n\nThe Jetson Orin advance came from software taking advantage of the latest version of the chip\u2019s cores, such as a programmable vision accelerator, an NVIDIA Ampere architecture GPU and a dedicated deep learning accelerator.\n\nThe MLPerf benchmarks are transparent and objective, so users can rely on their results to make informed buying decisions. They also cover a wide range of use cases and scenarios, so users know they can get performance that\u2019s both dependable and flexible to deploy.\n\nPartners submitting in this round included cloud service providers Microsoft Azure and Oracle Cloud Infrastructure and system manufacturers ASUS, Connect Tech, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Lenovo, QCT and Supermicro.\n\nOverall, MLPerf is backed by more than 70 organizations, including Alibaba, Arm, Cisco, Google, Harvard University, Intel, Meta, Microsoft and the University of Toronto.\n\nRead a technical blog for more details on how NVIDIA achieved the latest results.\n\nAll the software used in NVIDIA\u2019s benchmarks is available from the MLPerf repository, so everyone can get the same world-class results. The optimizations are continuously folded into containers available on the NVIDIA NGC software hub for GPU applications.\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/09/11/grace-hopper-inference-mlperf/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDkvMTEvZ3JhY2UtaG9wcGVyLWluZmVyZW5jZS1tbHBlcmYv.pdf"}, {"question": "What motivated Janice.Journal to learn new art skills?", "gt_answer": "Janice.Journal was motivated to learn new art skills as a way to cope with her busy schedule.", "gt_context": "Advantage AI: Elevated Creative Workflows in NVIDIA Canvas, Blender, TikTok and CapCut 3D Artist Janice.Journal creates the \u2018Eighth Wonder of the World\u2019 with AI-powered creativity this week \u2018In the NVIDIA Studio.\u2019\n\nAuthor: Gerardo Delgado\n\nEditor\u2019s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks and demonstrates how NVIDIA Studio technology improves creative workflows. We\u2019re also deep-diving on new GeForce RTX 40 Series GPU features, technologies and resources and how they dramatically accelerate content creation.\n\nAs beautiful and extraordinary as art forms can be, it can be easy to forget the simple joy and comforting escapism that content creation can provide for artists across creative fields.\n\nJanice K. Lee, a.k.a Janice.Journal \u2014 the subject of this week\u2019s In the NVIDIA Studio installment \u2014 is a TikTok sensation using AI to accelerate her creative process, find inspiration and automate repetitive tasks.\n\nAlso this week, NVIDIA Studio technology is powering some of the most popular mobile and desktop apps \u2014 driving creative workflows of both aspiring artists and creative professionals.\n\nWeek by week, AI becomes more ubiquitous within content creation.\n\nTake the popular social media app TikTok. All of its mobile app features, including AI Green Screen, are accelerated by GeForce RTX GPUs in the cloud. Other parts of TikTok creator workflows are also accelerated \u2014 Descript AI, a popular generative AI-powered video editing app, runs 50% faster on the latest NVIDIA L4 Tensor Core GPUs versus T4 Tensor Core GPUs .\n\nCapCut, the most widely used video editor by TikTok users, enables Simultaneous Scene Encoding, a functionality that sends independent groups of scenes to an NVIDIA Encoder (NVENC), contributing to shorter video export times without affecting image quality. This technology performs over 2x faster on NVIDIA GeForce RTX 4080 graphics cards versus on Apple\u2019s M2 Ultra.\n\nAdvanced users can move footage to their preferred desktop video editing app using native GPU-acceleration and RTX technology. This includes AV1 dual encoders ( NVIDIA GeForce RTX 4070 Ti graphics cards or higher required) for 40% better video quality for livestreamers, while video editors can slash export times nearly in half.\n\nJanice.Journal, a self-taught 3D creator, was motivated to learn new art skills as a way to cope with her busy schedule.\n\n\u201cI was going through a tough time during my junior year of college with classes and clubs,\u201d she said. \u201cWith no time to hang out with friends or decompress, my only source of comfort was learning something new every night for 20 minutes.\u201d\n\nHer passion for 3D creation quickly became evident. While Janice.Journal does consulting work during the day, she deep-dives into 3D creation at night, creating stunning scenes and tutorials to help other artists get started.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDgvMjkvamFuaWNlLWpvdXJuYWwtY2FudmFzLWJsZW5kZXItdGlrdG9rLWNhcGN1dC8=.pdf"}, {"question": "What software does Janice.Journal use to create her artwork?", "gt_answer": "Janice.Journal uses Blender software to create her artwork.", "gt_context": "One of her recent projects involved using the free NVIDIA Canvas beta app, which uses AI to interpret basic lines and shapes, translating them into realistic landscape images and textures.\n\nIn the above video, Janice.Journal aimed to create the \u201cEighth Wonder of the World,\u201d a giant arch inspired by the natural sandstone formations in Arches National Park in Utah.\n\n\u201cI wanted to create something that looked familiar enough where you could conceive to see it on \u2018National Geographic\u2019 but would still seem fantastical, awe-inspiring and simultaneously make the viewer question if it was real or fake,\u201d said Janice.Journal.\n\nUsing Canvas\u2019s 20 material brushes and nine style images, each with 10 variations, Janice.Journal got to work.\n\nShe said she \u201cgot a bit carried away\u201d on Canvas, resulting in an incredible masterpiece.\n\nJanice.Journal then had the option to export her painting into either a PNG or layered PSD file format to import into graphic design apps like Adobe Photoshop.\n\nCanvas is especially useful for concept artists looking to rapidly explore new ideas and for architects aiming to quickly draft backdrops and environments for buildings. With Canvas, Janice.Journal could rapidly paint a landscape without having to search for hours for the perfect stock photo, saving her valuable time to hone her 3D skills instead.\n\n\u201cI\u2019m still blown away trying it out for myself,\u201d said Janice.Journal. \u201cSeeing my simple drawings turn into fully HD images is wild \u2014 it really reminds me that the future is now.\u201d\n\nDownload NVIDIA Canvas, free for NVIDIA GeForce RTX graphics cards owners.\n\nJanice.Journal\u2019s portfolio features bright, vibrant visuals with a soft touch. Her 3D scene \u201cGameboy\u201d features two levels \u2014 no, not gaming levels, but living quarters built into a Gameboy, bringing to life every child\u2019s dream.\n\nMost artists start with a rough physical sketch to get concepts on paper, then move to Blender to block out basic shapes and sculpt models in finer detail.\n\nAI shines at this point in the workflow. Janice.Journal\u2019s GeForce RTX 3090 GPU- powered system unlocks Blender\u2019s Cycles RTX-accelerated OptiX ray tracing in the viewport, reducing noise and improving interactivity in the viewport for fluid movement with photorealistic visuals.\n\n\u201cSimply put, GPU acceleration and AI allow me to see renders in real time as they process modeling, lighting and the entire environment, enabling a preview as if I were to hit \u2018render\u2019 right away,\u201d said Janice.Journal. \u201cIt makes life 10 times easier for me.\u201d\n\nJanice.Journal has also been experimenting with AI-generated images as a way to brainstorm concepts and push creative boundaries \u2014 in her opinion, the most optimal use of AI.\n\nOnce everything has been modeled, Janice.Journal adds textures by playing around in Blender, applying clay shaders or displacement modifiers for \u201cbumpier\u201d textures. Then, she adds lighting and finishing touches to complete the ambience of the scene.\n\nCheck out Janice.Journal on TikTok .", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDgvMjkvamFuaWNlLWpvdXJuYWwtY2FudmFzLWJsZW5kZXItdGlrdG9rLWNhcGN1dC8=.pdf"}, {"question": "Where can I access tutorials on the NVIDIA Studio YouTube channel?", "gt_answer": "You can access tutorials on the NVIDIA Studio YouTube channel.", "gt_context": "Check out Janice.Journal on TikTok .\n\nFollow NVIDIA Studio on Instagram , Twitter and Facebook . Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter .\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/08/29/janice-journal-canvas-blender-tiktok-capcut/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDgvMjkvamFuaWNlLWpvdXJuYWwtY2FudmFzLWJsZW5kZXItdGlrdG9rLWNhcGN1dC8=.pdf"}, {"question": "What is ForgeOS?", "gt_answer": "ForgeOS is READY Robotics' 'no code' operating system designed to enable anyone to program robot hardware or automation devices.", "gt_context": "No Programmers? No Problem: READY Robotics Simplifies Robot Coding, Rollouts Startup\u2019s ForgeOS \u201cno code\u201d software, which features NVIDIA Isaac Sim, enables robot programming for non-coders.\n\nAuthor: Scott Martin\n\nRobotics hardware traditionally requires programmers to deploy it. READY Robotics wants to change that with its \u201cno code\u201d software aimed at people working in manufacturing who haven\u2019t got programming skills.\n\nThe Columbus, Ohio, startup is a spinout of robotics research from Johns Hopkins University. Kel Guerin was a PhD candidate there leading this research when he partnered with Benjamin Gibbs, who was at Johns Hopkins Technology Ventures, to land funding and pursue the company, now led by Gibbs as CEO.\n\n\u201cThere was this a-ha moment where we figured out that we could take these types of visual languages that are very easy to understand and use them for robotics,\u201d said Guerin, who\u2019s now chief innovation officer at the startup.\n\nREADY\u2019s \u201cno code\u201d ForgeOS operating system is designed to enable anyone to program any type of robot hardware or automation device. ForgeOS works seamlessly with plug-ins for most major robot hardware, and similar to other operating systems, like Android, it allows running third-party apps and plugins, providing a robust ecosystem of partners and developers working to make robots more capable, says Guerin.\n\nImplementing apps in robotics allows for new capabilities to be added to a robotic system in a few clicks, improving user experience and usability. Users can install their own apps, such as Task Canvas, which provides an intuitive building block programming interface similar to Scratch, a simple block-based visual language for kids developed at MIT Media Lab, which was influential in its design.\n\nTask Canvas allows users to show the actions of the robot, as well as all the other devices in an automation cell (such as grippers, programmable logic controllers, and machine tools) as blocks in a flow chart. The user can easily create powerful logic by tying these blocks together \u2014 without writing a single line of code. The interface offers nonprogrammers a more \u201cdrag-and-drop\u201d experience for programming and deploying robots, whether working directly on the factory floor with real robots on a tablet device or with access to simulation from Isaac Sim , powered by NVIDIA Omniverse .\n\nREADY is making robotics system design easier for nonprogrammers, helping to validate robots and systems for accelerated deployments.\n\nThe company is developing Omniverse Extensions \u2014 Omniverse kit applications based on Isaac Sim \u2014 and can deploy them on the cloud. It uses Omniverse Nucleus \u2014 the platform\u2019s database and collaboration engine \u2014 in the cloud as well.\n\nIsaac Sim is an application framework that enables simulation training for testing out robots in virtual manufacturing lines before deployment into the real world.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDUvMjMvcmVhZHktcm9ib3RpY3Mtc2ltcGxpZmllcy1yb2JvdC1jb2RpbmctaXNhYWMtc2ltLW9tbml2ZXJzZS8=.pdf"}, {"question": "What program does NVIDIA provide to startups like READY?", "gt_answer": "NVIDIA Inception", "gt_context": "\u201cBigger companies are moving to a sim-first approach to automation because these systems cost a lot of money to install. They want to simulate them first to make sure it\u2019s worth the investment,\u201d said Guerin.\n\nThe startup charges users of its platform licensing per software seat and also offers support services to help roll out and develop systems.\n\nIt\u2019s a huge opportunity. Roughly 90 percent of the world\u2019s factories haven\u2019t yet embraced automation, which is a trillion-dollar market.\n\nREADY is a member of NVIDIA Inception , a free program that provides startups with technical training, go-to-market support and AI platform guidance.\n\nThe startup operates in an ecosystem of world-leading industrial automation providers, and these global partners are actively developing integrations with platforms like NVIDIA Omniverse and are investing in READY, said Guerin.\n\n\u201cRight now we are starting to work with large enterprise customers who want to automate but they can\u2019t find the expertise to do it,\u201d he said.\n\nStanley Black & Decker, a global supplier of tools, is relying on READY to automate machines, including CNC lathes and mills.\n\nRobotic automation had been hard to deploy in their factory until Stanley Black & Decker started using READY\u2019s ForgeOS with its Station setup, which makes it possible to deploy robots in a day.\n\nREADY is putting simulation capabilities into the hands of nonprogrammers, who can learn its Task Canvas interface for drag-and-drop programming of industrial robots in about an hour, according to the company.\n\nThe company also runs READY Academy, which offers a catalog of free training for manufacturing professionals to learn the skills to design, deploy, manage and troubleshoot robotic automation systems.\n\n\u201cFor potential customers interested in our technology, being able to try it out with a robot simulated in Omniverse before they get their hands on the real thing \u2014 that\u2019s something we\u2019re really excited about,\u201d said Guerin.\n\nLearn more about NVIDIA Isaac Sim , Jetson Orin , Omniverse Enterprise .\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/05/23/ready-robotics-simplifies-robot-coding-isaac-sim-omniverse/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDUvMjMvcmVhZHktcm9ib3RpY3Mtc2ltcGxpZmllcy1yb2JvdC1jb2RpbmctaXNhYWMtc2ltLW9tbml2ZXJzZS8=.pdf"}, {"question": "What games can members play on GeForce NOW this week?", "gt_answer": "This week, members can play games like Ravenswatch, Meet Your Maker, Road 96: Mile 0, TerraScape, Curse of the Sea Rats, Supplice, and Teardown on GeForce NOW.", "gt_context": "Gaming on the Go: GeForce NOW Gives Members More Ways to Play\n\nPlus, seven new games are joining the cloud this week.\n\nAuthor: GeForce NOW Community\n\nThis GFN Thursday explores the many ways GeForce NOW members can play their favorite PC games across the devices they know and love.\n\nPlus, seven new games join the GeForce NOW library this week .\n\nGeForce NOW is the ultimate platform for gamers who want to play across more devices than their PC. Thanks to the power of the cloud, game progress can be paused and picked up across any device, whether crashing on the couch with a cell phone or traveling with a tablet.\n\nStream GeForce NOW on mobile without a controller using enhanced mobile touch controls enabled for games like Genshin Impact , the popular free-to-play, open-world, action role-playing game from HoYoverse . Members get access to updates as they release, including the upcoming version 3.6, \u201c A Parade of Providence.\u201d It\u2019s available to stream next week, and brings a new event and characters sure to delight Genshin Impact fans.\n\nOr stream on the go with GeForce NOW-recommended gamepads \u2014 including the Backbone One and the Razer Kishi \u2014 which work with Android and iOS devices to further enhance the cloud gaming mobile experience with added comfort. These devices are perfect for extended gaming sessions of up to six hours for Priority members and up to eight hours for Ultimate members .\n\nAnd with the ability to stream from a high-powered RTX gaming rig in the cloud, GeForce NOW is the only way to play graphics-intensive games like Cyberpunk 2077 and Marvel\u2019s Guardians of the Galaxy on mobile at up to 120 frames per second with ultra-low latency for Ultimate members.\n\nSo whether on a tablet, TV, Mac, Chromebook or phone, GeForce NOW members are covered with high-performance cloud streaming. Level up to an Ultimate or Priority membership today to experience all the benefits of PC gaming on the go.\n\nAs always, members can experience new games immediately from the cloud this week, without worrying about download times or system specs. Titles including Ravenswatch, Meet Your Maker, Road 96: Mile 0, TerraScape and Curse of the Sea Rats are all gamepad compatible for gaming on the go.\n\nPlus, popular sci-fi MMORPG Tower of Fantasy brings a boatload of new content, including an all-new map and underwater request missions where players can explore everything from the upper levels of the Grand Sea Island to the deep waters of Dragon Breath Volcano.\n\nIt comes on top of the seven games available this week:\n\nRoad 96: Mile 0 (New release on Steam )\n\nMeet Your Maker (New release on Steam )\n\nTerraScape (New release on Steam )\n\nCurse of the Sea Rats (New release on Steam , April 6)\n\nRavenswatch (New release on Steam , April 6)\n\nSupplice (New release on Steam , April 6)\n\nTeardown ( Steam )\n\nFree members can now claim their Marvel\u2019s Midnight Suns reward. Check the rewards portal to claim Captain Marvel\u2019s Medieval Marvel suit by Saturday, May 6.", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDQvMDYvZ2Vmb3JjZS1ub3ctdGh1cnNkYXktYXByaWwtNi8=.pdf"}, {"question": "What device keeps you connected to the cloud?", "gt_answer": "Let us know in the comments below, on Twitter or Facebook.", "gt_context": "Finally, we\u2019ve got our question of the week to wrap up this GFN Thursday. Let us know what device keeps you connected to the cloud in the comments below, on Twitter or Facebook . time to pack for a last minute trip \u2013 which device are you bringing to play on the go? nn \u2014 nn NVIDIA GeForce NOW (@NVIDIAGFN) April 5, 2023\n\nOriginal URL: https://blogs.nvidia.com/blog/2023/04/06/geforce-now-thursday-april-6/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjMvMDQvMDYvZ2Vmb3JjZS1ub3ctdGh1cnNkYXktYXByaWwtNi8=.pdf"}, {"question": "What is Searidge Technologies?", "gt_answer": "Searidge Technologies is a company based in Ottawa, Canada that has created AI-powered software to help the aviation industry increase efficiency and enhance safety for airports.", "gt_context": "Searidge Technologies Offers a Safety Net for Airports NVIDIA Metropolis member\u2019s vision AI software boosts air-traffic control, helps automate tarmac traffic across the globe.\n\nAuthor: Angie Lee\n\nPlanes taxiing for long periods due to ground traffic \u2014 or circling the airport while awaiting clearance to land \u2014 don\u2019t just make travelers impatient. They burn fuel unnecessarily, harming the environment and adding to airlines\u2019 costs.\n\nSearidge Technologies, based in Ottawa, Canada, has created AI-powered software to help the aviation industry avoid such issues, increasing efficiency and enhancing safety for airports.\n\nIts Digital Tower and Apron solutions, powered by NVIDIA GPUs, use vision AI to manage traffic control for airports and alert users of safety concerns in real time. Searidge enables airports to handle 15-30% more aircraft per hour and reduce the number of tarmac incidents.\n\nThe company\u2019s tech is used across the world, including at London\u2019s Heathrow Airport, Fort Lauderdale-Hollywood International Airport in Florida and Dubai International Airport, to name a few.\n\nIn June, Searidge\u2019s Digital Apron and Tower Management System (DATMS) went operational at Hong Kong International Airport as part of an initial phase of the Airport Authority Hong Kong\u2019s large-scale expansion plan, which will bring machine learning to a new, integrated airport operations center.\n\nIn addition, Searidge provides the Civil Aviation Department of Hong Kong\u2019s air-traffic control systems with next-generation safety enhancements using its vision AI software.\n\nThe deployment in Hong Kong is the industry\u2019s largest digital platform for tower and apron management \u2014 and the first collaboration between an airport and an air-navigation service provider for a single digital platform.\n\nSearidge is a member of NVIDIA Metropolis , a partner program focused on bringing to market a new generation of vision AI applications that make the world\u2019s most important spaces and operations safer and more efficient.\n\nThe early 2000s saw massive growth and restructuring of airports \u2014 and with this came increased use of digital tools in the aviation industry.\n\nFounded in 2006, Searidge has become one of the first to bring machine learning to video processing in the aviation space, according to Pat Urbanek, the company\u2019s vice president of business development for Asia Pacific and the Middle East.\n\n\u201cVideo processing software for air-traffic control didn\u2019t exist before,\u201d Urbanek said. \u201cIt\u2019s taken a decade to become mainstream \u2014 but now, intelligent video and machine learning have been brought into airport operations, enabling new levels of automation in air-traffic control and airside operations to enhance safety and efficiency.\u201d", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTAvMDQvc2VhcmlkZ2UtdGVjaG5vbG9naWVzLXNhZmV0eS1uZXQtZm9yLWFpcnBvcnRzLw==.pdf"}, {"question": "What is the name of the underlying machine learning platform used by DATMS?", "gt_answer": "The underlying machine learning platform used by DATMS is called Aimee.", "gt_context": "DATMS\u2019s underlying machine learning platform, called Aimee, enables traffic-lighting automation based on data from radars and 4K-resolution video cameras. Aimee is trained to detect aircraft and vehicles. And DATMS is programmed based on the complex roadway rules that determine how buses and other vehicles should operate on service roads across taxiways.\n\nAfter analyzing video data, the AI-enabled system activates or deactivates airports\u2019 traffic lights in real time, based on when it\u2019s appropriate for passenger buses and other vehicles to move. The status of\n\neach traffic light and additional details can also be visualized on end-user screens in airport traffic control rooms.\n\n\u201cWhat size is an aircraft? Does it have enough space to turn on the runway? Is it going too fast? All of this information and more is sent out over the Searidge Platform and displayed on screen based on user preference,\u201d said Marco Rueckert, vice president of technology at Searidge.\n\nThe same underlying technology is applied to provide enhanced safety alerts for aircraft departure and arrival. In real time, DATMS alerts air traffic controllers of safety-standard breaches \u2014 taking into consideration clearances for aircraft to enter a runway, takeoff or land.\n\nSearidge uses NVIDIA GPUs to optimize inference throughput across its deployments at airports around the globe. To train its AI models, Searidge uses an NVIDIA DGX A100 system .\n\n\u201cThe NVIDIA platform allowed us to really bring down the hardware footprint and costs from the customer\u2019s perspective,\u201d Rueckert said. \u201cIt provides the scalability factor, so we can easily add more cameras with increasing resolution, which ultimately helps us solve more problems and address more customer needs.\u201d\n\nThe company is also exploring the integration of voice data \u2014 based on communication between pilots and air-traffic controllers \u2014 within its machine learning platform to further enhance airport operations.\n\nSearidge\u2019s Digital Tower and Apron solutions can be customized for the unique challenges that come with varying airport layouts and traffic patterns.\n\n\u201cOf course, having aircraft land on time and letting passengers make their connections increases business and efficiency, but our technology has an environmental impact as well,\u201d Urbanek said. \u201cIt can prevent burning of huge amounts of fuel \u2014 in the air or at the gate \u2014 by providing enhanced efficiency and safety for taxiing, takeoff and landing.\u201d\n\nWatch the latest GTC keynote by NVIDIA founder and CEO Jensen Huang to discover how vision AI and other groundbreaking technologies are shaping the world:\n\nFeature video courtesy of Dubai Airports.\n\nOriginal URL: https://blogs.nvidia.com/blog/2022/10/04/searidge-technologies-safety-net-for-airports/", "document": "YmxvZ3MubnZpZGlhLmNvbS9ibG9nLzIwMjIvMTAvMDQvc2VhcmlkZ2UtdGVjaG5vbG9naWVzLXNhZmV0eS1uZXQtZm9yLWFpcnBvcnRzLw==.pdf"}, {"question": "What advancements does the GeForce RTX 4060 family of GPUs offer?", "gt_answer": "The GeForce RTX 4060 family of GPUs offers advancements such as DLSS 3 neural rendering, third-generation ray-tracing technologies, and improved performance at high frame rates.", "gt_context": "GeForce RTX 4060 Family Is Here: NVIDIA\u2019s Revolutionary Ada Lovelace Architecture Comes to Core Gamers Everywhere, Starting at $299\n\nSuperpowered by AI, Newest GPUs Provide 2x the Horsepower of Latest Gaming Consoles\n\nNVIDIA today announced the GeForce RTX\u2122 4060 family of GPUs, with two graphics cards that deliver all the advancements of the NVIDIA\u00ae Ada Lovelace architecture \u2014 including DLSS 3 neural rendering and third-generation ray- tracing technologies at high frame rates \u2014 starting at just $299.\n\nThe GeForce RTX 4060 Ti and GeForce RTX 4060 deliver unparalleled performance at fantastic value \u2014 bringing for the first time to the company\u2019s popular 60-class twice the horsepower of the latest gaming consoles, including ray tracing for premium image quality on top games.\n\n\u201cThe RTX 4060 family delivers PC gamers both great value and great performance at 1080p, whether they\u2019re building a gaming battle box or an AI-assisted creation station,\u201d said Matt Wuebbling, vice president of global GeForce marketing at NVIDIA. \u201cThese GPUs deliver an incredible upgrade, starting at just $299, putting Ada Lovelace and DLSS 3 in the hands of millions more worldwide.\u201d\n\nDLSS Brings AI-Accelerated Performance to 300+ Titles The GeForce RTX 4060 family provides access to the 300+ games and applications that now support DLSS, with eagerly anticipated titles The Lord of the Rings: Gollum and Diablo IV to include DLSS 3. A DLSS 3 plug-in for Unreal Engine 5 is also coming soon.\n\nDLSS 3 showcases the growing importance of AI in real-time games by creating new, high-quality frames for smoother gameplay. It massively increases performance in combination with DLSS Super Resolution, which uses AI to output higher- resolution frames from a lower-resolution input. Exceptional responsiveness is maintained through NVIDIA Reflex, which reduces input lag.\n\nThe Ultimate Graphics Cards for 1080p Gaming The GeForce RTX 4060 Ti is on average 2.6x faster than the RTX 2060 SUPER GPU and 1.7x faster than the GeForce RTX 3060 Ti GPU. For titles without frame generation, the RTX 4060 Ti is 1.6x faster than the RTX 2060 SUPER GPU.\n\nThe RTX 4060 Ti\u2019s memory subsystem features 32MB of L2 cache and 8GB or 16GB of ultra-high-speed GDDR6 memory. The RTX 4060 has 24MB of L2 cache with 8GB of GDDR6. The L2 cache reduces demands on the GPU\u2019s memory interface, ultimately improving performance and power efficiency.\n\nRay tracing performance has improved significantly from the previous generation, thanks to advancements like Shader Execution Reordering, cutting-edge Opacity Micromap and Displaced Micro-Mesh Engines. These innovations enable even the most demanding games to simultaneously implement multiple ray-tracing effects, and even full ray tracing, also known as path tracing, for unparalleled realism and immersion.", "document": "R2VGb3JjZSBSVFggNDA2MCBGYW1pbHkgNS8xOC8yMy5wZGY=.pdf"}, {"question": "What features does the NVIDIA Studio platform provide for content creators?", "gt_answer": "The NVIDIA Studio platform brings creators RTX acceleration, AI tools, over 110 creative apps, NVIDIA Studio Drivers, and AI-powered Studio software such as NVIDIA Omniverse, Canvas, and Broadcast.", "gt_context": "Perfect for Content Creators The GeForce RTX 4060 family of GPUs comes backed by the NVIDIA Studio platform, which brings creators RTX acceleration and AI tools at a more accessible starting price. Serving livestreamers, video editors, 3D artists and others, the platform supercharges over 110 creative apps, provides lasting stability with NVIDIA Studio Drivers and includes a powerful suite of AI-powered Studio software, such as NVIDIA Omniverse\u2122, Canvas and Broadcast.\n\nCreators of many disciplines can benefit from new fourth-generation Tensor Cores, which provide a significant performance increase for AI tools compared with the last generation. Accelerated AI features allow creators to automate tedious tasks and apply advanced effects with ease.\n\n3D modelers rendering high-resolution, ray-traced scenes can expect up to 45% faster performance than with the previous- generation GeForce RTX 3060 family. Adding AI-powered DLSS 3 \u2014 including within Omniverse, a hub for interconnecting existing 3D workflows to replace linear pipelines with live-sync creation and real-time collaboration \u2014 greatly accelerates the viewport in real-time 3D rendering applications, enabling a more fluid editing experience with full lighting, materials and physics.\n\nBroadcasters can use the eighth-generation NVIDIA video encoder, called NVENC, with best-in-class AV1 hardware encoding, and benefit from 40% better encoding efficiency. Livestreams will appear as if bitrate was increased by 40% \u2014 a big boost in image quality for popular broadcast apps like OBS Studio. Broadcasters can also benefit from NVIDIA Broadcast\n\nand its set of AI effects that improve microphones and webcams, turning rooms into home studios.\n\nVideo editors can benefit from a host of AI tools like auto-reframe, smart object selection and depth estimation, now available in top applications such as Adobe Premiere Pro and DaVinci Resolve, and export in AV1 for reduced file sizes.\n\nGeForce RTX Offers a Graphics Card for Every Kind of User With this latest launch, the GeForce RTX 40 Series now has an option for every resolution and every user.\n\nNVIDIA will celebrate the 4060 family\u2019s launch with 100 streamers, and give away 460 of the new cards to members of the gaming community as part of its \u201cSummer of RTX\u201d event. Learn more on the sweepstakes webpage.\n\nAvailability The GeForce RTX 4060 Ti 8GB will be available starting Wednesday, May 24, at $399. The GeForce RTX 4060 Ti 16GB version will be available in July, starting at $499. GeForce RTX 4060 will also be available in July, starting at $299.", "document": "R2VGb3JjZSBSVFggNDA2MCBGYW1pbHkgNS8xOC8yMy5wZGY=.pdf"}, {"question": "Which add-in card providers will offer custom boards for the entire RTX 4060 family?", "gt_answer": "ASUS, Colorful, Gainward, GALAX, GIGABYTE, INNO3D, KFA2, MSI, Palit, PNY, and ZOTAC.", "gt_context": "An NVIDIA Founders Edition design of the GeForce RTX 4060 Ti 8GB will be available directly from NVIDIA.com and select retailers. Custom boards for the entire RTX 4060 family, including stock-clocked and factory-overclocked models, will be available from top add-in card providers such as ASUS, Colorful, Gainward, GALAX, GIGABYTE, INNO3D, KFA2, MSI, Palit, PNY and ZOTAC, as well as from gaming system integrators and builders worldwide.\n\nAbout NVIDIA Since its founding in 1993, NVIDIA (NASDAQ: NVDA) has been a pioneer in accelerated computing. The company\u2019s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, ignited the era of modern AI and is fueling the creation of the industrial metaverse. NVIDIA is now a full-stack computing company with data- center-scale offerings that are reshaping industry. More information at https://nvidianews.nvidia.com/.", "document": "R2VGb3JjZSBSVFggNDA2MCBGYW1pbHkgNS8xOC8yMy5wZGY=.pdf"}, {"question": "What are some of the technologies mentioned in the press release?", "gt_answer": "Some of the technologies mentioned in the press release are GeForce RTX 40 Series, Ada Lovelace architecture and GPUs, DLSS 3, DLSS Super Resolution, Reflex, RTX 2060 SUPER GPU, GeForce RTX 3060 Ti GPU, GeForce RTX 3060, NVIDIA Studio platform, NVIDIA Omniverse, NVIDIA Canvas, NVIDIA Broadcast, and NVENC.", "gt_context": "Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, performance, features and availability of our products, collaborations, services and technologies, including GeForce RTX 40 Series including GeForce RTX 4060 Ti and RTX 4060, Ada Lovelace architecture and GPUs, DLSS 3, DLSS Super Resolution, Reflex, RTX 2060 SUPER GPU, GeForce RTX 3060 Ti GPU, GeForce RTX 3060, Shader Execution Reordering, Opacity Micromap, Displaced Micro-Mesh Engine, NVIDIA Studio platform including Studio Drivers, NVIDIA Omniverse, NVIDIA Canvas, NVIDIA Broadcast, NVENC, including the eighth generation NVIDIA Encoder, and fourth-generation Tensor Cores; DLSS 3 showcasing the growing importance of AI in real-time games by creating new, high-quality frames for smoother gameplay; and celebrating the 4060 family launch as part of a \u201cSummer of RTX\u201d event with 100 streamers, and giving away 460 of the new cards to members of the gaming community are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners' products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company's website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.\n\n\u00a9 2023 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, GeForce RTX and NVIDIA Omniverse are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.\n\nBenjamin Berraondo Director of Global PR, GeForce Products NVIDIA Corporation +44 7979 384482 bberraondo@nvidia.com", "document": "R2VGb3JjZSBSVFggNDA2MCBGYW1pbHkgNS8xOC8yMy5wZGY=.pdf"}]

Использование cookies

Мы используем файлы cookie в соответствии с Политикой конфиденциальности и Политикой использования cookies.

Нажимая кнопку «Принимаю», Вы даете АО «СберТех» согласие на обработку Ваших персональных данных в целях совершенствования нашего веб-сайта и Сервиса GitVerse, а также повышения удобства их использования.

Запретить использование cookies Вы можете самостоятельно в настройках Вашего браузера.