NVIDIA Partners With APEC Economies to Change Lives, Increase Opportunity, Improve Outcomes


When patients in Vietnam enter a medical facility in distress, doctors use NVIDIA technology to get more accurate scans to diagnose their ailments. In Hong Kong, doctors are leveraging generative AI to discover new cures for patients.

Improving the health and well-being of citizens and strengthening economies and communities are key themes as world leaders soon gather in San Francisco for the 2023 Asia-Pacific Economic Cooperation (APEC) Summit.

As these leaders meet to discuss bold solutions for improving the lives of their citizens and societies, NVIDIA’s AI and accelerated computing initiatives serve as a crucial enabler.

NVIDIA’s work to improve outcomes for everyday people while tackling future challenges builds on years of deep investment with APEC partners. With a strong presence in countries across the region, including a workforce of thousands and numerous collaborative projects in areas from farming to healthcare to education, NVIDIA is delivering new technologies and workforce training programs to enhance industrial development and advance generative AI research.

Beyond technological advancements, these efforts spur economic growth, create good-paying jobs and improve the health and well-being of people globally.

Research and National Compute Partnerships

NVIDIA has advanced AI research partnerships with several APEC economies. These partnerships accelerate scientific breakthroughs in AI and high performance computing to address national challenges such as healthcare and skills development, and to build more robust local AI ecosystems that protect and advance well-being, prosperity and security. For example:

  • Australia’s national science and research organization, CSIRO, has teamed with NVIDIA to advance Australia’s AI program across climate action, space exploration, quantum computing and AI education.
  • Singapore’s National Supercomputing Centre and Ministry of Education have partnered with NVIDIA to drive sovereign AI capabilities with a priority focus on sectors such as healthcare, climate science and digital twins.
  • Thailand was the first country in Southeast Asia to participate in NVIDIA’s AI Nations initiative, bringing together the Ministry of Education with a consortium of top universities to advance public-private collaborations in urban planning, public health and autonomous vehicles.
  • In Vietnam, NVIDIA is partnering with Viettel, the nation’s largest employer, and Vietnam’s Academy for Science & Technology to upskill workforces, accelerate the introduction of AI services to industry and deploy next-generation 5G services.

Innovation Ecosystems

Startups are at the leading edge of AI innovation, and a robust startup ecosystem is vital to advancing technology within APEC economies.

NVIDIA Inception is a free program to help startups innovate faster. Through it, NVIDIA supports over 5,000 startups across APEC economies, and more than 15,000 globally, by providing cutting-edge technology, connections with venture capitalists and access to the latest technical resources.

In 2023, NVIDIA added nearly 1,000 APEC-area startups to the program. In addition to creating economic opportunities, Inception supports small- and medium-sized enterprises in developing novel solutions to some of society’s biggest challenges. Here’s what some of its members are doing:

  • In Malaysia, Tapway uses AI to reduce congestion and streamline traffic for more than 1 million daily travelers.
  • In New Zealand, Lynker uses geospatial analysis, deep learning and remote sensing for earth observation. Lynker’s technology measures carbon sequestration on farms; detects, monitors and helps restore wetlands; and enables more effective disaster relief.
  • In Thailand, AltoTech Global integrates AI software with Internet of Things devices to optimize energy consumption for hotels, buildings, factories and smart cities, with the ultimate goal of contributing to the net-zero economy and helping customers achieve their net-zero targets.

Digital Upskilling and Tools for Growth

The NVIDIA Deep Learning Institute (DLI) provides AI training and digital upskilling programs that cultivate innovation and create economic opportunities.

DLI’s training and certification program helps individuals and organizations accelerate skills development and workforce transformation in AI, high performance computing and industrial digitalization.

Hands-on, self-paced and instructor-led courses are created and taught by NVIDIA experts, bringing real-world experience and deep technical know-how to developers and IT professionals.

Through this program, NVIDIA has trained more than 115,000 individuals in APEC economies, including more than 16,000 new trainees this year.

Separately, the NVIDIA Developer Program offers more than 2 million developers in APEC economies access to software development kits, application programming interfaces, pretrained AI models and performance analysis tools to help them create and innovate. Members receive free hands-on training, access to developer forums and early access to new products and services.

Creating a Better Future for All

As nations work together to address common challenges and improve the lives of their citizens, NVIDIA will continue to leverage its world-class technologies to help create a better world for all.


Dr. Aengus Tran, Co-Founder of Annalise.ai and Harrison.ai, on Using AI as a Spell Check for Health Checks


Clinician-led healthcare AI company Harrison.ai has built an AI system that effectively serves as a “spell checker” for radiologists — flagging critical findings to improve the speed and accuracy of radiology image analysis, reducing misdiagnoses.

In the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Harrison.ai CEO and co-founder Aengus Tran about the company’s mission to scale global healthcare capacity with autonomous AI systems.

Harrison.ai’s initial product, annalise.ai, is an AI tool that automates radiology image analysis to enable faster, more accurate diagnoses. It can produce 124-130 different possible diagnoses and flag key findings to aid radiologists in their final diagnosis. Currently, annalise.ai works for chest X-rays and brain CT scans, with more on the way.

While an AI designed for categorizing traffic lights, for example, doesn’t need perfection, medical tools must be highly accurate — any oversight could be fatal. To overcome this challenge, annalise.ai was trained on millions of meticulously annotated images — some were annotated three to five times over before being used for training.

Harrison.ai is also developing Franklin.ai, a sibling AI tool aimed at accelerating and improving the accuracy of histopathology diagnosis — in which a clinician performs a biopsy and inspects the tissue for the presence of cancerous cells. Like annalise.ai, Franklin.ai flags critical findings to help pathologists speed up diagnoses and make them more accurate.

Ethical concerns about AI use are ever-rising, but for Tran, the question isn’t whether it’s ethical to use AI for medical diagnosis but “actually the converse: Is it ethical to not use AI for medical diagnosis,” especially if “humans using those AI systems simply pick up more misdiagnosis, pick up more cancer and conditions?”

Tran also talked about the future of AI systems and suggested that the focus is dual: first, improve preexisting systems, and then think of new cutting-edge solutions.

And for those looking to break into careers in AI and healthcare, Tran says that the “first step is to decide upfront what problems you’re willing to spend a huge part of your time solving first, before the AI part,” emphasizing that the “first thing is actually to fall in love with some problem.”

You Might Also Like

Jules Anh Tuan Nguyen Explains How AI Lets Amputee Control Prosthetic Hand, Video Games
A postdoctoral researcher at the University of Minnesota discusses his efforts to allow amputees to control their prosthetic limb — right down to the finger motions — with their minds.

Overjet’s Dr. Wardah Inam on Bringing AI to Dentistry
Overjet, a member of NVIDIA Inception, is moving fast to bring AI to dentists’ offices. Dr. Wardah Inam, CEO of the company, discusses using AI to improve patient care.

Immunai CTO and Co-Founder Luis Voloch on Using Deep Learning to Develop New Drugs
Luis Voloch talks about tackling the challenges of the immune system with a machine learning and data science mindset.

Subscribe to the AI Podcast: Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.



Digital Artist Steven Tung Shows Off So-fish-ticated Style This Week ‘In the NVIDIA Studio’


Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep-diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Taiwanese artist Steven Tung creates captivating 2D and 3D digital art that explores sci-fi, minimalism and realism and pushes artistic boundaries.

This week In the NVIDIA Studio, Tung shares the inspiration and creative workflow behind his whimsical animation, The Given Fish.

Professional-grade technology, which was once available only at select special effects studios, is becoming increasingly accessible.

“Visual production capabilities continue to skyrocket, generating a growing demand for better computer hardware among the general public,” Tung said. “The evolving synergy between art and technology can spark endless possibilities for creators.”

Tung uses an MSI MEG Trident X2 desktop, powered by GeForce RTX 4090 graphics, to accelerate his creative workflow.

The MSI MEG Trident X2 desktop, powered by GeForce RTX 4090 graphics.

“The enhanced speed and performance expedite various processes, such as updating material textures in Adobe Substance 3D Painter and rendering in Blender,” said Tung. “The necessary specifications and requirements align, enabling maximum creativity without limitations.”

Exquisite Visuals Made E-fish-ciently

Tung’s 3D animation, The Given Fish, may look simple at first glance — but it’s surprisingly complex.

“GeForce RTX GPUs are indispensable hardware for 3D rendering tasks. Faster speeds bring significant benefits in production efficiency and time saved.” — Steven Tung

In the imagined world of the animation, the stone fish depicted can be consumed by people. The concept is that once taken out of the aquarium, a stone fish transforms into a real, living one.

“I have a strong desire to have an aquarium at home, but it’s not practical,” said Tung. “The next best thing is to turn that emotion into art.”

Tung began by creating concept sketches in Adobe Photoshop, where he had access to over 30 GPU-accelerated features that could help modify and adjust his canvas and maximize his efficiency.

Concept art for “The Given Fish.”

Next, Tung jumped from 2D to 3D with ZBrush. He first built a basic model and then refined critical details with custom brushes — adding greater depth and dimension with authentic, hand-sculpted textures.

Advanced sculpting in ZBrush.

He then used the UV unwrapping feature in RizomUV to ensure that his models were properly unwrapped and ready for texture application.

UV unwrapping feature in RizomUV.

Tung imported the models into Adobe Substance 3D Painter, where he meticulously painted textures, blended materials and used the built-in library to achieve lifelike stone textures. RTX-accelerated light and ambient occlusion baking optimized his assets in seconds.

Applying textures in Adobe Substance 3D Painter.

To bring all the elements together, Tung imported the models and materials into Blender. He set up texture channels, assigned texture files and assembled the models so that they would be true to the compositions outlined in the initial sketch.

Achieving realistic stone textures in Adobe Substance 3D Painter.

Next, Tung used Blender Cycles to light and render the scene.

Composition edits in Blender.

Blender Cycles’ RTX-accelerated, AI-powered OptiX ray tracing enabled interactive, photorealistic movement in the viewport and sped up animation work — all powered by his GeForce RTX 4090 GPU-equipped system.

Animation work in Blender.

RTX-accelerated OptiX ray tracing in Blender Cycles delivered the fastest final frame renders.

Digital artist Steven Tung.

Check out Tung’s portfolio on Instagram.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. 


‘Starship for the Mind’: University of Florida Opens Malachowsky Hall, an Epicenter for AI and Data Science


Embodying the convergence of AI and academia, the University of Florida on Friday inaugurated the Malachowsky Hall for Data Science & Information Technology.

The sleek, seven-story building is poised to play a pivotal role in UF’s ongoing efforts to harness the transformative power of AI, reaffirming its stature as one of the nation’s leading public universities.

Evoking Apple co-founder Steve Jobs’ iconic description of a personal computer, NVIDIA’s founder and CEO Jensen Huang described Malachowsky Hall — named for NVIDIA co-founder Chris Malachowsky — and the HiPerGator AI supercomputer it hosts as a “starship for knowledge discovery.”

“Steve Jobs called (the PC) ‘the bicycle of the mind,’ a device that propels our thoughts further and faster,” Huang said.

“What Chris Malachowsky has gifted this institution is nothing short of the ‘starship of the mind’ — a vehicle that promises to take our intellect to uncharted territories,” Huang said.

The inauguration of the 260,000-square-foot structure marks a milestone in the partnership between UF alum Malachowsky, NVIDIA and the state of Florida — a collaboration that has propelled UF to the forefront of AI innovation.

Malachowsky and NVIDIA both made major contributions toward its construction, bolstered by a $110 million investment from the state of Florida.

University of Florida President Ben Sasse and NVIDIA CEO Jensen Huang speak at the opening of Malachowsky Hall.

Following the opening, Huang and UF’s new president, Ben Sasse, met to discuss the impact of AI and data science across UF and beyond for students just starting their careers.

Sasse underscored the importance of adaptability in a rapidly changing world, telling the audience: “work in lots and lots of different organizations … because lifelong work in any one, not just firm, but any one industry is going to end in our lives. You’re ultimately going to have to figure out how to reinvent yourselves at 30, 35, 40 and 45.”

Huang offered students very different advice, recalling how he met his wife, Lori, who was in the audience, as an undergraduate. “Have a good pickup line … do you want to know the pickup line?” Huang asked, pausing a beat. “You want to see my homework?”

The spirit of Sasse and Huang’s adaptable approach to career and personal development is embodied in Malachowsky Hall, designed to bring together people from academia and industry, research and government.

Packed with innovative collaboration spaces and labs, the hall features a spacious 400-seat auditorium, dedicated high-performance computing study spaces and a rooftop terrace that unveils panoramic campus vistas.

Sustainability is woven into its design, highlighted by energy-efficient systems and rainwater harvesting facilities.

Malachowsky Hall will serve as a conduit to bring the on-campus advances in AI to Florida’s thriving economy, which continues to outpace the nation in jobs and GDP growth.

Together, NVIDIA founder and UF alumnus Chris Malachowsky and NVIDIA donated $50 million toward the University of Florida’s HiPerGator AI supercomputer.

UF’s efforts to bring AI and academia together, catalyzed by support from Malachowsky and NVIDIA, go far beyond Malachowsky Hall.

In 2020, UF announced that Malachowsky and NVIDIA together donated $50 million toward HiPerGator, one of the most powerful AI supercomputers in the country.

With additional state support, UF recently added more than 110 AI faculty members to the 300 already engaged in AI teaching and research.

As a result, UF extended AI-focused courses, workshops and projects across the university, enabling its 55,000 students to delve into AI and its interdisciplinary applications.

Friday’s ribbon-cutting will open exciting new opportunities for the university, its students and the state of Florida to realize the potential of AI innovations across sectors.

Huang likened pursuing knowledge through AI to embarking on a “starship.” “You’ve got to go as far as you can,” he urged students.

For a deeper exploration of Malachowsky Hall and UF’s groundbreaking AI initiatives, visit UF’s website.


How AI-Based Cybersecurity Strengthens Business Resilience


The world’s 5 billion internet users and nearly 54 billion devices generate 3.4 petabytes of data per second, according to IDC. As digitalization accelerates, enterprise IT teams are under greater pressure to identify and block incoming cyber threats to ensure business operations and services are not interrupted — and AI-based cybersecurity provides a reliable way to do so.

Few industries appear immune to cyber threats. This year alone, international hotel chains, financial institutions, Fortune 100 retailers, air traffic control systems and the U.S. government have all reported threats and intrusions.

Whether from insider error, cybercriminals, hacktivists or other threats, risks in the cyber landscape can damage an enterprise’s reputation and bottom line. A breach can paralyze operations, jeopardize proprietary and customer data, result in regulatory fines and destroy customer trust.

Using AI and accelerated computing, businesses can reduce the time and operational expenses required to detect and block cyber threats while freeing up resources to focus on core business value operations and revenue-generating activities.

Here’s a look at how industries are applying AI techniques to safeguard data, enable faster threat detection and mitigate attacks to ensure the consistent delivery of service to customers and partners.

Public Sector: Protecting Physical Security, Energy Security and Citizen Services

AI-powered analytics and automation tools are helping government agencies provide citizens with instant access to information and services, make data-driven decisions, model climate change, manage natural disasters, and more. But public entities managing digital tools and infrastructure face a complex cyber risk environment that includes regulatory compliance requirements, public scrutiny, large interconnected networks and the need to protect sensitive data and high-value targets.

Adversary nation-states may initiate cyberattacks to disrupt networks, steal intellectual property or swipe classified government documents. Internal misuse of digital tools and infrastructure combined with sophisticated external espionage places public organizations at high risk of data breach. Espionage actors have also been known to recruit inside help, with 16% of public administration breaches showing evidence of collusion. To protect critical infrastructure, citizen data, public records and other sensitive information, federal organizations are turning to AI.

The U.S. Department of Energy’s (DOE) Office of Cybersecurity, Energy Security and Emergency Response (CESER) is tasked with strengthening the resilience of the country’s energy sector by addressing emerging threats and improving energy infrastructure security. The DOE-CESER has invested more than $240 million in cybersecurity research, development and demonstration projects since 2010.

In one project, the department developed a tool that uses AI to automate and optimize security vulnerability and patch management in energy delivery systems. Another project for artificial diversity and defense security uses software-defined networks to enhance the situational awareness of energy delivery systems, helping ensure uninterrupted flows of energy.

The Defense Advanced Research Projects Agency (DARPA), which is charged with researching and investing in breakthrough technologies for national security, is using machine learning and AI in several areas. The DARPA CASTLE program trains AI to defend against advanced, persistent cyber threats. As part of the effort, researchers intend to accelerate cybersecurity assessments with approaches that are automated, repeatable and measurable. The DARPA GARD program builds platforms, libraries, datasets and training materials to help developers build AI models that are resistant to deception and adversarial attacks.

To keep up with an evolving threat landscape and ensure physical security, energy security and data security, public organizations must continue integrating AI to achieve a dynamic, proactive and far-reaching cyber defense posture.

Financial Services: Securing Digital Transactions, Payments and Portfolios 

Banks, asset managers, insurers and other financial service organizations are using AI and machine learning to deliver superior performance in fraud detection, portfolio management, algorithmic trading and self-service banking.

With constant digital transactions, payments, loans and investment trades, financial service institutions manage some of the largest, most complex and most sensitive datasets of any industry. Behind only healthcare, these organizations suffer the second-highest cost of a data breach, at nearly $6 million per incident. This cost grows if regulators issue fines or if recovery includes legal fees and lawsuit settlements. Worse still, lost business may never be recovered if trust can’t be repaired.

Banks and financial institutions use AI to improve insider threat detection, detect phishing and ransomware, and keep sensitive information safe.

FinSec Innovation Lab, a joint venture by Mastercard and Enel X, is using AI to help its customers defend against ransomware. Prior to working with FinSec, one card-processing customer suffered a LockBit ransomware attack in which 200 company servers were infected in just 1.5 hours. The company was forced to shut down servers and suspend operations, resulting in an estimated $7 million in lost business.

FinSec replicated this attack in its lab but deployed the NVIDIA Morpheus cybersecurity framework, NVIDIA DOCA software framework for intrusion detection and NVIDIA BlueField DPU computing clusters. With this mix of AI and accelerated computing, FinSec was able to detect the ransomware attack in less than 12 seconds, quickly isolate virtual machines and recover 80% of the data on infected servers. This type of real-time response helps businesses avoid service downtime and lost business while maintaining customer trust.

With AI to help defend against cyberattacks, financial institutions can identify intrusions and anticipate future threats to keep financial records, accounts and transactions secure.

Retail: Keeping Sales Channels and Payment Credentials Safe

Retailers are using AI to power personalized product recommendations, dynamic pricing and customized marketing campaigns. Multichannel digital platforms have made in-store and online shopping more convenient: up to 48% of consumers save a card on file with a merchant, significantly boosting card-not-present transactions. While digitization has brought convenience, it has also made sensitive data more accessible to attackers.

Sitting on troves of digital payment credentials for millions of customers, retailers are a prime target for cybercriminals looking to take advantage of security gaps. According to a recent Data Breach Investigations Report from Verizon, 37% of confirmed data disclosures in the retail industry resulted in stolen payment card data.

Malware attacks, ransomware and distributed denial of service attacks are all on the rise, but phishing remains the favored vector for an initial attack. With a successful phishing intrusion, criminals can steal credentials, access systems and launch ransomware.

Best Buy manages a network of more than 1,000 stores across the U.S. and Canada. With multichannel digital sales across both countries, protecting consumer information and transactions is critical. To defend against phishing and other cyber threats, Best Buy began using customized machine learning and NVIDIA Morpheus to better secure their infrastructure and inform their security analysts.

After deploying this AI-based cyber defense, the retail giant improved the accuracy of phishing detection to 96% while reducing false-positive rates. With a proactive approach to cybersecurity, Best Buy is protecting its reputation as a tech expert focused on customer needs.

From complex supply chains to third-party vendors and multichannel point-of-sale networks, expect retailers to continue integrating AI to protect operations as well as critical proprietary and customer data.

Smart Cities and Spaces: Protecting Critical Infrastructure and Transit Networks

IoT devices and AI that analyze movement patterns, traffic and hazardous situations have great potential to improve the safety and efficiency of spaces and infrastructure. But as airports, shipping ports, transit networks and other smart spaces integrate IoT and use data, they also become more vulnerable to attack.

In the last couple of years, there have been distributed denial of service (DDoS) attacks on airports and air traffic control centers and ransomware attacks on seaports, city municipalities, police departments and more. Attacks can paralyze information systems, ground flights, disrupt the flow of cargo and traffic, and delay the delivery of goods to markets. Hostile attacks could have far more serious consequences, including physical harm or loss of life.

In connected spaces, AI-driven security can analyze vast amounts of data to predict threats, isolate attacks and provide rapid self-healing after an intrusion. AI algorithms trained on emails can halt threats in the inbox and block phishing attempts like those that delivered ransomware to seaports earlier this year. Machine learning can be trained to recognize DDoS attack patterns to prevent the type of incoming malicious traffic that brought down U.S. airport websites last year.
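
To make the traffic-pattern idea above concrete, here is a minimal, purely illustrative sketch of flagging a DDoS-like request burst with a sliding-window rate check. Production defenses like those described here rely on trained machine learning models over many traffic features; the `RateAnomalyDetector` class, window size and request threshold below are all hypothetical choices for the example.

```python
from collections import defaultdict, deque

# Hypothetical tuning values for this sketch only.
WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 100

class RateAnomalyDetector:
    """Toy detector: flag a source whose request rate exceeds a sliding-window limit."""

    def __init__(self, window=WINDOW_SECONDS, limit=MAX_REQUESTS_PER_WINDOW):
        self.window = window
        self.limit = limit
        self.history = defaultdict(deque)  # source IP -> recent request timestamps

    def observe(self, source_ip, timestamp):
        """Record a request; return True if the source now looks anomalous."""
        q = self.history[source_ip]
        q.append(timestamp)
        # Evict timestamps that have fallen out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.limit

detector = RateAnomalyDetector()
# A burst of 150 requests from one source within a second trips the detector.
flags = [detector.observe("203.0.113.7", t * 0.005) for t in range(150)]
print(flags[-1])  # True: the burst exceeds the per-window limit
```

A real system would score many features at once (packet sizes, protocol mix, source diversity) rather than a single rate, but the structure — maintain recent state per source, compare against learned normal behavior — is the same.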

Organizations adopting smart space technology, such as the Port of Los Angeles, are making efforts to get ahead of the threat landscape. In 2014, the Port of LA established a cybersecurity operations center and hired a dedicated cybersecurity team. In 2021, the port followed up with a cyber resilience center to enhance early-warning detection for cyberattacks that have the potential to impact the flow of cargo.

The U.S. Federal Aviation Administration has developed an AI certification framework that assesses the trustworthiness of AI and ML applications. The FAA also implements a zero-trust cyber approach, enforces strict access control and runs continuous verification across its digital environment.

By bolstering cybersecurity and integrating AI, smart space and transport infrastructure administrators can offer secure access to physical spaces and digital networks to protect the uninterrupted movement of people and goods.

Telecommunications: Ensuring Network Resilience and Blocking Incoming Threats

Telecommunications companies are leaning into AI to power predictive maintenance for maximum network uptime, along with network optimization, equipment troubleshooting, call routing and self-service systems.

The industry is responsible for critical national infrastructure in every country, supports over 5 billion customer endpoints and is expected to constantly deliver above 99% reliability. As reliance on cloud, IoT and edge computing expands and 5G becomes the norm, immense digital surface areas must be protected from misuse and malicious attack.

Telcos can deploy AI to ensure the security and resilience of networks. AI can monitor IoT devices and edge networks to detect anomalies and intrusions, identify fake users, mitigate attacks and quarantine infected devices. AI can continuously assess the trustworthiness of devices, users and applications, thereby shortening the time needed to identify fraudsters.

Pretrained AI models can be deployed to protect 5G networks from threats such as malware, data exfiltration and DoS attacks.

Using deep learning and NVIDIA BlueField DPUs, Palo Alto Networks has built a next-generation firewall addressing 5G needs, maximizing cybersecurity performance while maintaining a small infrastructure footprint. The DPU powers accelerated intelligent network filtering to parse, classify and steer traffic to improve performance and isolate threats. With more efficient computing that deploys fewer servers, telcos can maximize the return on their compute investments and minimize digital attack surface areas.

By putting AI to work, telcos can build secure, encrypted networks to ensure network availability and data security for both individual and enterprise customers.

Automotive: Insulating Vehicle Software From Outside Influence and Attack

Modern cars rely on complex AI and ML software stacks running on in-vehicle computers to process data from cameras and other sensors. These vehicles are essentially giant, moving IoT devices — they perceive the environment, make decisions, advise drivers and even control the vehicle with autonomous driving features.

Like other connected devices, autonomous vehicles are susceptible to various types of cyberattacks. Bad actors can infiltrate and compromise AV software both on board and from third-party providers. Denial of service attacks can disrupt over-the-air software updates that vehicles rely on to operate safely. Unauthorized access to communications systems like onboard WiFi, Bluetooth or RFID can expose vehicle systems to the risk of remote manipulation and data theft. This can jeopardize geolocation and sensor data, operational data, driver and passenger data, all of which are crucial to functional safety and the driving experience.

AI-based cybersecurity can help monitor in-car and network activities in real time, allowing for rapid response to threats. AI can be deployed to secure over-the-air updates, preventing tampering and ensuring the authenticity of new software. AI-driven encryption can protect data transmitted over WiFi, Bluetooth and RFID connections. AI can also probe vehicle systems for vulnerabilities and take remedial steps.
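
The update-authenticity step can be sketched in a few lines. This is an illustrative example only, using a keyed hash from Python's standard library; real automotive OTA systems use asymmetric signatures and hardware-backed key storage, and the key name and firmware payload below are hypothetical.

```python
import hashlib
import hmac

# Placeholder shared secret for the sketch; production systems would use
# asymmetric keys provisioned in a hardware security module.
SHARED_KEY = b"vehicle-provisioned-secret"

def sign_update(payload: bytes, key: bytes = SHARED_KEY) -> str:
    """Produce a keyed SHA-256 digest of an update payload."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_update(payload: bytes, signature: str, key: bytes = SHARED_KEY) -> bool:
    """Check a received update against its signature before installing."""
    expected = sign_update(payload, key)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(expected, signature)

firmware = b"ecu-firmware-v2.4.1"  # hypothetical update payload
sig = sign_update(firmware)
print(verify_update(firmware, sig))                # True: authentic update
print(verify_update(firmware + b"tamper", sig))    # False: reject modified payload
```

The point of the sketch is the gate itself: an update that fails verification is never handed to the installer, which is what blocks the tampering scenario described above.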

Ranging from AI-powered access control to unlock and start vehicles to detecting deviations in sensor performance and patching security vulnerabilities, AI will play a crucial role in the safe development and deployment of autonomous vehicles on our roads.

Keeping Operations Secure and Customers Happy With AI Cybersecurity 

By deploying AI to protect valuable data and digital operations, industries can focus their resources on innovating better products, improving customer experiences and creating new business value.

NVIDIA offers a number of tools and frameworks to help enterprises swiftly adjust to the evolving cyber risk environment. The NVIDIA Morpheus cybersecurity framework provides developers and software vendors with optimized, easy-to-use tools to build solutions that can proactively detect and mitigate threats while drastically reducing the cost of cyber defense operations. To help defend against phishing attempts, the NVIDIA spear phishing detection AI workflow uses NVIDIA Morpheus and synthetic training data created with the NVIDIA NeMo generative AI framework to flag and halt inbox threats.

The Morpheus SDK also enables digital fingerprinting to collect and analyze behavior characteristics for every user, service, account and machine across a network to identify atypical behavior and alert network operators. With the NVIDIA DOCA software framework, developers can create software-defined, DPU-accelerated services, while leveraging zero trust to build more secure applications.
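The idea behind digital fingerprinting can be illustrated with a toy sketch (this is not the Morpheus API; the function names and threshold below are hypothetical, and a production system would model far richer behavioral features):

```python
from statistics import mean, stdev

def build_fingerprints(history):
    """Build a per-user baseline (mean, stdev) of daily event counts."""
    return {user: (mean(counts), stdev(counts)) for user, counts in history.items()}

def is_anomalous(baselines, user, todays_count, threshold=3.0):
    """Flag activity more than `threshold` standard deviations from the user's norm."""
    mu, sigma = baselines[user]
    if sigma == 0:
        return todays_count != mu
    return abs(todays_count - mu) / sigma > threshold

history = {"alice": [100, 95, 102, 98, 105], "bob": [10, 12, 9, 11, 10]}
fp = build_fingerprints(history)
print(is_anomalous(fp, "bob", 250))    # bob's sudden spike stands out → True
print(is_anomalous(fp, "alice", 101))  # alice is within her normal range → False
```

The key point is that each entity gets its own baseline, so a volume of activity that is normal for one account can still be flagged as anomalous for another.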

AI-based cybersecurity empowers developers across industries to build solutions that can identify, capture and act on threats and anomalies to ensure business continuity and uninterrupted service, keeping operations safe and customers happy.

Learn how AI can help your organization achieve a proactive cybersecurity posture to protect customer and proprietary data to the highest standards.

Read More

How Are Foundation Models Used in Gaming?

How Are Foundation Models Used in Gaming?

AI technologies are having a massive impact across industries, including media and entertainment, automotive, customer service and more. For game developers, these advances are paving the way for creating more realistic and immersive in-game experiences.

From creating lifelike characters that convey emotions to transforming simple text into captivating imagery, foundation models are becoming essential in accelerating developer workflows while reducing overall costs. These powerful AI models have unlocked a realm of possibilities, empowering designers and game developers to build higher-quality gaming experiences.

What Are Foundation Models?

A foundation model is a neural network trained on massive amounts of data and then adapted to tackle a wide variety of tasks. Such models can enable a range of general tasks, such as text, image and audio generation. Over the last year, the popularity and use of foundation models have rapidly increased, with hundreds now available.

For example, GPT-4 is a large multimodal model developed by OpenAI that can generate human-like text based on context and past conversations. Another, DALL-E 3, can create realistic images and artwork from a description written in natural language.

Powerful foundation models like NVIDIA NeMo and the Edify model in NVIDIA Picasso make it easy for companies and developers to inject AI into their existing workflows. For example, using the NVIDIA NeMo framework, organizations can quickly train, customize and deploy generative AI models at scale. And using NVIDIA Picasso, teams can fine-tune pretrained Edify models with their own enterprise data to build custom products and services for generative AI images, videos, 3D assets, texture materials and 360 HDRi.

How Are Foundation Models Built?

Foundation models can be used as a base for AI systems that perform multiple tasks. Organizations can use large amounts of unlabeled data to create their own foundation models.

The dataset should be as large and diverse as possible, as too little data or poor-quality data can lead to inaccuracies — sometimes called hallucinations — or cause finer details to go missing in generated outputs.

Next, the dataset must be prepared. This includes cleaning the data, removing errors and formatting it in such a way that the model can understand it. Bias is a pervasive issue when preparing a dataset, so it’s important to measure, reduce and tackle these inconsistencies and inaccuracies.
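The cleaning step above can be sketched in a few lines. This is a deliberately minimal illustration (the function name and rules are assumptions, not part of any NVIDIA toolchain); real pipelines add deduplication at scale, language filtering, PII scrubbing and bias measurement:

```python
def clean_corpus(records):
    """Normalize whitespace, drop empty strings and exact duplicates before training."""
    seen, cleaned = set(), []
    for text in records:
        t = " ".join(text.split())  # collapse stray whitespace and newlines
        if t and t.lower() not in seen:
            seen.add(t.lower())
            cleaned.append(t)
    return cleaned

raw = ["  Hello   world ", "hello world", "", "A second document."]
print(clean_corpus(raw))  # ['Hello world', 'A second document.']
```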

Training a foundation model can be time-consuming, especially given the size of the model and the amount of data required. Hardware like NVIDIA A100 or H100 Tensor Core GPUs, along with high-performance data systems like the NVIDIA DGX SuperPOD, can accelerate training. For example, GPT-3 was trained on over 1,000 NVIDIA A100 GPUs in about 34 days.

The three requirements of a successful foundation model.

After training, the foundation model is evaluated on quality, diversity and speed. There are several methods for evaluating performance, for example:

  • Tools and frameworks that quantify how well the model predicts a sample of text
  • Metrics that compare generated outputs with one or more references and measure the similarities between them
  • Human evaluators who assess the quality of the generated output on various criteria
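The second bullet, comparing generated outputs against references, can be made concrete with a toy token-overlap score. This is a crude stand-in for real metrics like ROUGE or BLEU (the function name is an assumption for illustration):

```python
from collections import Counter

def unigram_f1(generated, reference):
    """F1 over unigram overlap between a generated string and a reference string."""
    gen, ref = generated.split(), reference.split()
    overlap = sum((Counter(gen) & Counter(ref)).values())  # multiset intersection
    if overlap == 0:
        return 0.0
    precision = overlap / len(gen)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(unigram_f1("the cat sat on the mat", "the cat sat on the mat"))  # 1.0
print(round(unigram_f1("a cat sat", "the cat sat on the mat"), 3))     # 0.444
```

Production evaluation typically combines several such automatic metrics with human judgment, since no single score captures quality, diversity and factuality at once.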

Once the model passes the relevant tests and evaluations, it can then be deployed for production.

Exploring Foundation Models in Games

Pretrained foundation models can be leveraged by middleware, tools and game developers both during production and at runtime. Training a base model from scratch requires significant resources, time and expertise. Currently, many developers within the gaming industry are exploring off-the-shelf models but need custom solutions that fit their specific use cases: models trained on commercially safe data and optimized for real-time performance, without exorbitant deployment costs. The difficulty of meeting these requirements has slowed adoption of foundation models.

However, innovation within the generative AI space is swift, and once major hurdles are addressed, developers of all sizes — from startups to AAA studios — will use foundation models to gain new efficiencies in game development and accelerate content creation. Additionally, these models can help create completely new gameplay experiences.

The top industry use cases are centered around intelligent agents and AI-powered animation and asset creation. For example, many creators today are exploring models for creating intelligent non-playable characters, or NPCs.

Custom LLMs fine-tuned with the lingo and lore of specific games can generate human-like text, understand context and respond to prompts in a coherent manner. They’re designed to learn patterns and language structures and understand game state changes — evolving and progressing alongside the player in the game.

As NPCs become increasingly dynamic, real-time animation and audio that sync with their responses will be needed. Developers are using NVIDIA Riva to create expressive character voices using speech and translation AI. And designers are tapping NVIDIA Audio2Face for AI-powered facial animations.

Foundation models are also being used for asset and animation generation. Asset creation during the pre-production and production phases of game development can be time-consuming, tedious and expensive.

With state-of-the-art diffusion models, developers can iterate more quickly, freeing up time to spend on the most important aspects of the content pipeline, such as developing higher-quality assets. The ability to fine-tune these models from a studio’s own repository of data ensures the outputs generated are similar to the art styles and designs of their previous games.

Foundation models are readily available, and the gaming industry is only in the beginning phases of understanding their full capabilities. Various solutions have been built for real-time experiences, but the use cases are limited. Fortunately, developers can easily access models and microservices through cloud APIs today and explore how AI can affect their games and scale their solutions to more customers and devices than ever before.

The Future of Foundation Models in Gaming

Foundation models are poised to help developers realize the future of gaming. Diffusion models and large language models are becoming much more lightweight as developers look to run them natively on a range of hardware power profiles, including PCs, consoles and mobile devices.

The accuracy and quality of these models will only continue to improve as developers look to generate high-quality assets that need little to no touching up before being dropped into an AAA gaming experience.

Foundation models will also be used in areas that have been challenging for developers to overcome with traditional technology. For example, autonomous agents can help analyze and detect world space during game development, which will accelerate processes for quality assurance.

The rise of multimodal foundation models, which can ingest a mix of text, image, audio and other inputs simultaneously, will further enhance player interactions with intelligent NPCs and other game systems. Also, developers can use additional input types to improve creativity and enhance the quality of generated assets during production.

Multimodal models also show great promise in improving the animation of real-time characters, one of the most time-intensive and expensive processes of game development. They may be able to help make characters’ locomotion identical to real-life actors, infuse style and feel from a range of inputs, and ease the rigging process.

Learn More About Foundation Models in Gaming

From enhancing dialogue and generating 3D content to creating interactive gameplay, foundation models have opened up new opportunities for developers to forge the future of gaming experiences.

Learn more about foundation models and other technologies powering game development workflows.

Read More

GeForce NOW-vember Brings Over 50 New Games to Stream In the Cloud

GeForce NOW-vember Brings Over 50 New Games to Stream In the Cloud

Gear up with gratitude for more gaming time. GeForce NOW brings members a cornucopia of 15 newly supported games to the cloud this week. That’s just the start — there are a total of 54 titles coming in the month of November.

Members can also join thousands of esports fans in the cloud with the addition of Virtex Stadium to the GeForce NOW library for a ‘League of Legends’ world championship viewing party.

Esports Like Never Before

Virtex Stadium on GeForce NOW
Watch “League of Legends” esports like never before.

This year’s League of Legends world championship finals are coming to Virtex Stadium — an online virtual stadium now streaming on NVIDIA’s cloud gaming infrastructure.

In Virtex Stadium, esports fans can hang out with friends from across the world, create and personalize avatars, and watch live competitions together — all from the comfort of their homes.

Starting on Thursday, Nov. 2, watch League of Legends Worlds 2023 in Virtex Stadium with thousands of others. Use props and emotes to cheer players on together via chat.

GeForce NOW members and League of Legends fans can drop into Virtex Stadium without needing to create a new login. Within the Virtex Stadium app, members can choose to create a “Ready Player Me” avatar and account to save their digital characters for future visits. Members can even link their Twitch accounts to chat and emote with other viewers while in the stadium.

Catch all the action on the following dates:

  • Quarterfinal 1: Nov. 2 at 9 a.m. CET
  • Quarterfinal 2: Nov. 3 at 9 a.m. CET
  • Quarterfinal 3: Nov. 4 at 9 a.m. CET
  • Quarterfinal 4: Nov. 5 at 9 a.m. CET
  • Semifinal 1: Nov. 11 at 9 a.m. CET
  • Semifinal 2: Nov. 12 at 9 a.m. CET
  • Final: Nov. 19 at 9 a.m. CET

Time to Shine

Apex Legends: Ignite on GeForce NOW
SHINY!

Electronic Arts’ and Respawn Entertainment’s Apex Legends: Ignite, the newest season for the battle royale first-person shooter, is now available to stream from the cloud. Light the way with Conduit, the new support Legend with shield-based abilities. Plus, check out a faster and deadlier Storm Point map, a new Battle Pass with rewards, and more to help ignite Apex Legends players’ ways to victory.

Members can start their adventures now, along with 15 other games newly supported in the cloud this week:

  • Headbangers: Rhythm Royale (New release on Steam, Xbox and available on PC Game Pass, Oct. 31)
  • Jusant (New release on Steam, Xbox and available on PC Game Pass, Oct. 31)
  • RoboCop: Rogue City (New release on Steam, Nov. 2)
  • The Talos Principle 2 (New release on Steam, Nov. 2)
  • StrangerZ (New release on Steam, Nov. 3)
  • Curse of the Dead Gods (Xbox, available on Microsoft Store)
  • Daymare 1994: Sandcastle (Steam)
  • ENDLESS Dungeon (Steam)
  • F1 Manager 2023 (Xbox, available on PC Game Pass)
  • Heretic’s Fork (Steam)
  • HOT WHEELS UNLEASHED 2 – Turbocharged (Epic Games Store)
  • Kingdoms Reborn (Steam)
  • Q.U.B.E. 2 (Epic Games Store)
  • Soulstice (Epic Games Store)
  • Virtex Stadium (Free)

Then check out the plentiful games for the rest of November:

  • The Invincible (New release on Steam, Nov. 6)
  • Roboquest (New release on Steam, Nov. 7)
  • Stronghold: Definitive Edition (New release on Steam, Nov. 7)
  • Dungeons 4 (New release on Steam, Xbox and available on PC Game Pass, Nov. 9)
  • Space Trash Scavenger (New release on Steam, Nov. 9)
  • Spirittea (New release on Steam, Xbox and available on PC Game Pass, Nov. 13)
  • Naheulbeuk’s Dungeon Master (New release on Steam, Nov. 15)
  • Last Train Home (New release on Steam, Nov. 28)
  • Gangs of Sherwood (New release on Steam, Nov. 30)
  • Airport CEO (Steam)
  • Arcana of Paradise —The Tower (Steam)
  • Blazing Sails: Pirate Battle Royale (Epic Games Store)
  • Breathedge (Xbox, available on Microsoft Store)
  • Bridge Constructor: The Walking Dead (Xbox, available on Microsoft Store)
  • Bus Simulator 21 (Xbox, available on Microsoft Store)
  • Farming Simulator 19 (Xbox, available on Microsoft Store)
  • GoNNER (Xbox, available on Microsoft Store)
  • GoNNER2 (Xbox, available on Microsoft Store)
  • Hearts of Iron IV (Xbox, available on Microsoft Store)
  • Hexarchy (Steam)
  • I Am Future (Epic Games Store)
  • Imagine Earth (Xbox, available on Microsoft Store)
  • Jurassic World Evolution 2 (Xbox, available on PC Game Pass)
  • Land of the Vikings (Steam)
  • Onimusha: Warlords (Steam)
  • Overcooked! 2 (Xbox, available on Microsoft Store)
  • Saints Row IV (Xbox, available on Microsoft Store)
  • Settlement Survival (Steam)
  • SHENZHEN I/O (Xbox, available on Microsoft Store)
  • SOULVARS (Steam)
  • The Surge 2 (Xbox, available on Microsoft Store)
  • Thymesia (Xbox, available on Microsoft Store)
  • Trailmakers (Xbox, available on PC Game Pass)
  • Tropico 6 (Xbox, available on Microsoft Store)
  • Wartales (Xbox, available on PC Game Pass)
  • The Wonderful One: After School Hero (Steam)
  • Warhammer Age of Sigmar: Realms of Ruin (Steam)
  • West of Dead (Xbox, available on Microsoft Store)
  • Wolfenstein: The New Order (Xbox, available on PC Game Pass)
  • Wolfenstein: The Old Blood (Steam, Epic Games Store, Xbox and available on PC Game Pass)

Outstanding October

On top of the 60 games announced in October, an additional 48 joined the cloud last month, including several from this week’s additions, Curse of the Dead Gods, ENDLESS Dungeon, Farming Simulator 19, Hearts of Iron IV, Kingdoms Reborn, RoboCop: Rogue City, StrangerZ, The Talos Principle 2, Thymesia, Tropico 6 and Virtex Stadium:

  • AirportSim (New release on Steam, Oct. 19)
  • Battle Chasers: Nightwar (Xbox, available on Microsoft Store)
  • Black Skylands (Xbox, available on Microsoft Store)
  • Blair Witch (Xbox, available on Microsoft Store)
  • Call of the Sea (Xbox, available on Microsoft Store)
  • Chicory: A Colorful Tale (Xbox and available on PC Game Pass)
  • Cricket 22 (Xbox and available on PC Game Pass)
  • Dead by Daylight (Xbox and available on PC Game Pass)
  • Deceive Inc. (Epic Games Store)
  • Dishonored (Steam)
  • Dishonored: Death of the Outsider (Steam, Epic Games Store, Xbox and available on PC Game Pass)
  • Dishonored Definitive Edition (Epic Games Store, Xbox and available on PC Game Pass)
  • Dishonored 2 (Steam, Epic Games Store, Xbox and available on PC Game Pass)
  • Dune: Spice Wars (Xbox and available on PC Game Pass)
  • Eternal Threads (New release on Epic Games Store, Oct. 19)
  • Everspace 2 (Xbox and available on PC Game Pass)
  • EXAPUNKS (Xbox and available on PC Game Pass)
  • From Space (New release on Xbox, available on PC Game Pass, Oct. 12)
  • Ghostrunner 2 (New release on Steam, Oct. 26)
  • Ghostwire: Tokyo (Steam, Epic Games Store, Xbox and available on PC Game Pass)
  • Golf With Your Friends (Xbox, available on PC Game Pass)
  • Gungrave G.O.R.E (Xbox and available on PC Game Pass)
  • The Gunk (Xbox and available on PC Game Pass)
  • Hotel: A Resort Simulator (New release on Steam, Oct. 12)
  • Kill It With Fire (Xbox and available on PC Game Pass)
  • Railway Empire 2 (Xbox and available on PC Game Pass)
  • Rubber Bandits (Xbox, available on PC Game Pass)
  • Saints Row IV (Xbox, available on Microsoft Store)
  • Saltsea Chronicles (New release on Steam, Oct. 12)
  • Soulstice (Epic Games Store)
  • State of Decay 2: Juggernaut Edition (Steam, Epic Games Store, Xbox and available on PC Game Pass)
  • Supraland Six Inches Under (Epic Games Store)
  • Techtonica (Xbox and available on PC Game Pass)
  • Teenage Mutant Ninja Turtles: Shredder’s Revenge (Xbox and available on PC Game Pass)
  • Torchlight III (Xbox and available on PC Game Pass)
  • Totally Accurate Battle Simulator (Xbox and available on PC Game Pass)
  • Tribe: Primitive Builder (New release on Steam, Oct. 12)
  • Trine 5: A Clockwork Conspiracy (Epic Games Store)

War Hospital didn’t make it in October due to a delay of its launch date. StalCraft and VEILED EXPERTS also didn’t make it in October due to technical issues. Stay tuned to GFN Thursday for more updates.

What are you looking forward to streaming this month? Let us know on Twitter or in the comments below.

Read More

Turing’s Mill: AI Supercomputer Revs UK’s Economic Engine

Turing’s Mill: AI Supercomputer Revs UK’s Economic Engine

The home of the first industrial revolution just made a massive investment in the next one.

The U.K. government has announced it will spend £225 million ($273 million) to build one of the world’s fastest AI supercomputers.

Called Isambard-AI, it’s the latest in a series of systems named after a legendary 19th century British engineer and hosted by the University of Bristol. When fully installed next year, it will pack 5,448 NVIDIA GH200 Grace Hopper Superchips to deliver a whopping 21 exaflops of AI performance for researchers across the country and beyond.

The announcement was made at the AI Safety Summit, a gathering of over 100 global government and technology leaders held at Bletchley Park, the site of the world’s first digital programmable computer and the wartime workplace of innovators like Alan Turing, widely considered the father of AI.

AI “will bring a transformation as far-reaching as the industrial revolution, the coming of electricity or the birth of the internet,” said British Prime Minister Rishi Sunak in a speech last week about the event, designed to catalyze international collaboration.

Propelling the Modern Economy

Like one of Isambard Brunel’s creations — the first propeller-driven, ocean-going iron ship — the AI technology running on his namesake is already driving countries forward.

AI contributes more than £3.7 billion to the U.K. economy and employs more than 50,000 people, said Michelle Donelan, the nation’s Science, Innovation and Technology Secretary, in an earlier announcement about the system.

The investment in the so-called AI Research Resource in Bristol “will catalyze scientific discovery and keep the U.K. at the forefront of AI development,” she said.

Like AI itself, the system will be used across a wide range of organizations tapping the potential of machine learning to advance robotics, data analytics, drug discovery, climate research and more.

“Isambard-AI represents a huge leap forward for AI computational power in the U.K.,” said Simon McIntosh-Smith, a Bristol professor and director of the Isambard National Research Facility. “Today, Isambard-AI would rank within the top 10 fastest supercomputers in the world and, when in operation later in 2024, it will be one of the most powerful AI systems for open science anywhere.”

The Next Manufacturing Revolution

Like the industrial revolution, AI promises advances in manufacturing. That’s one reason why Isambard-AI will be based at the National Composites Centre (NCC, pictured above) in the Bristol and Bath Science Park, one of the country’s seven manufacturing research centers.

The U.K.’s Frontier AI Taskforce, a research group leading a global effort on how frontier AI can be safely developed, will also be a major user of the system.

Hewlett Packard Enterprise, which is building Isambard-AI, is also collaborating with the University of Bristol on energy-efficiency plans that support net-zero carbon targets mandated by the British government.

Energy-Efficient HPC

A second system coming next year to the NCC will demonstrate the energy efficiency of Arm-based CPUs for non-accelerated high-performance computing workloads.

Isambard-3 will deliver an estimated 2.7 petaflops of FP64 peak performance and consume less than 270 kilowatts of power, ranking it among the world’s three greenest non-accelerated supercomputers. That’s because the system — part of a research alliance among universities of Bath, Bristol, Cardiff and Exeter — will sport 384 Arm-based NVIDIA Grace CPU Superchips to power medical and scientific research.

“Isambard-3’s application performance efficiency of up to 6x its predecessor, which rivals many of the 50 fastest TOP500 systems, will provide scientists with a revolutionary new supercomputing platform to advance groundbreaking research,” said Bristol’s McIntosh-Smith, when the system was announced in March.

Read More

Unlocking the Power of Language: NVIDIA’s Annamalai Chockalingam on the Rise of LLMs

Unlocking the Power of Language: NVIDIA’s Annamalai Chockalingam on the Rise of LLMs

Generative AI and large language models (LLMs) are stirring change across industries — but according to NVIDIA Senior Product Manager of Developer Marketing Annamalai Chockalingam, “we’re still in the early innings.”

In the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Chockalingam about LLMs: what they are, their current state and their future potential.

LLMs are a “subset of the larger generative AI movement” that deals with language. They’re deep learning algorithms that can recognize, summarize, translate, predict and generate language.

AI has been around for a while, but according to Chockalingam, three key factors enabled LLMs.

One is the availability of large-scale data sets to train models with. As more people used the internet, more data became available for use. The second is the development of computer infrastructure, which has become advanced enough to handle “mountains of data” in a “reasonable timeframe.” And the third is advancements in AI algorithms, allowing for non-sequential or parallel processing of large data pools.

LLMs can do five things with language: generate, summarize, translate, instruct or chat. With a combination of “these modalities and actions, you can build applications” to solve any problem, Chockalingam said.

Enterprises are tapping LLMs to “drive innovation,” “develop new customer experiences,” and gain a “competitive advantage.” They’re also exploring what safe deployment of those models looks like, aiming to achieve responsible development, trustworthiness and repeatability.

New techniques like retrieval augmented generation (RAG) could boost LLM development. RAG involves feeding models with up-to-date “data sources or third-party APIs” to achieve “more appropriate responses” — granting them current context so that they can “generate better” answers.
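A minimal sketch of the RAG pattern, assuming a naive keyword-overlap retriever (real systems use vector embeddings and semantic search; the function names here are illustrative, not from any specific library):

```python
def retrieve(query, documents, k=1):
    """Rank documents by naive keyword overlap with the query; return the top k."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    """Prepend retrieved context so the model can ground its answer in current data."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The store opens at 9 a.m. on weekdays.",
    "Returns are accepted within 30 days of purchase.",
]
print(build_prompt("When does the store open?", docs))
```

The retrieval step is what supplies the "up-to-date data sources" the model never saw during training; swapping in fresh documents changes the answers without retraining the model.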

Chockalingam encourages those interested in LLMs to “get your hands dirty and get started” — whether that means using popular applications like ChatGPT or playing with pretrained models in the NVIDIA NGC catalog.

NVIDIA offers a full-stack computing platform for developers and enterprises experimenting with LLMs, with an ecosystem of over 4 million developers and 1,600 generative AI organizations. To learn more, register for LLM Developer Day on Nov. 17 to hear from NVIDIA experts about how best to develop applications.

Subscribe to the AI Podcast: Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better. Have a few minutes to spare? Fill out this listener survey.

Read More

Riding the Rays: Sunswift Racing Shines in World Solar Challenge Race

Riding the Rays: Sunswift Racing Shines in World Solar Challenge Race

In the world’s largest solar race car event of the year, the University of New South Wales Sunswift Racing team is having its day in the sun.

The World Solar Challenge, which began some 35 years ago, attracts academic participants from across the globe. This year’s event drew nearly 100 competitors.

The race runs nearly 1,900 miles over the course of about four days and pits challengers in a battle not for speed but for greatest energy efficiency.

UNSW Sydney won the energy efficiency competition and crossed the finish line first, taking the Cruiser Cup with its Sunswift 7 vehicle, which uses the NVIDIA Jetson Xavier NX module for energy optimization. It was also the only competitor to race with four people on board and a remote mission control team.

“It’s a completely different proposition to say we can use the least amount of energy and arrive in Adelaide before anybody else, but crossing the line first is just about bragging rights,” said Richard Hopkins, project manager at Sunswift and a UNSW professor. Hopkins previously managed Formula 1 race teams in the U.K.

Race organizers bill the event, which cuts across the entire Australian continent on public roads — from Darwin in the north to Adelaide in the south — as the “world’s greatest innovation and engineering challenge contributing to a more sustainable mobility future.” It’s also become a launchpad for students pursuing career paths in the electric vehicle industry.

Like many of the competitors, UNSW is coming back after a three-year hiatus from the race due to the COVID-19 pandemic, making this year’s competition highly anticipated.

“Every single team member needs to understand what they’re doing and what their role is on the team and perform at their very best during those five-and-a-half days,” said Hopkins. “It is exhausting.”

All In on Energy Efficiency  

The race allows participants to start with a fully charged battery and to charge when the vehicles stop for the night at two locations. The remaining energy used, some 90%, comes from the sun and the vehicles’ solar panels.

UNSW’s seventh-generation Sunswift 7 runs algorithms to optimize for energy efficiency, essentially shutting down all nonessential computing to maximize battery life.

The solar electric vehicle relies on NVIDIA Jetson AI to give it an edge across its roughly 100 automotive monitoring and power management systems.

It can also factor in whether it should drive faster or slower based on weather forecasts. For instance, the car will urge the driver to go faster if it’s going to rain later in the day when conditions would force the car to slow down.
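The weather-based pacing decision described above can be sketched as a simple rule. This is a hypothetical illustration only (the function name, speeds and thresholds are invented for the example and are not Sunswift's actual algorithm):

```python
def recommend_speed(distance_km, hours_until_rain, cruise_kmh=70, max_kmh=90):
    """If rain would slow the car later, suggest covering the leg before it arrives."""
    if hours_until_rain <= 0:
        return cruise_kmh            # already raining: no point rushing
    needed = distance_km / hours_until_rain  # average speed required to beat the rain
    if needed <= cruise_kmh:
        return cruise_kmh            # no rush: stay at the energy-efficient speed
    return min(needed, max_kmh)      # push harder, capped at a safe maximum

print(recommend_speed(160, 2))  # must average 80 km/h to stay dry
print(recommend_speed(100, 2))  # only 50 km/h needed, so keep cruising at 70
```

The real system weighs this against battery state, solar forecast and the roughly 100 monitoring subsystems the Jetson module processes.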

The Sunswift 7 vehicle was designed to mostly drive in a straight line from Darwin to Adelaide, and the object is to use the least amount of power outside of that mission, said Hopkins.

“Sunswift 7 late last year was featured in the Guinness Book of World Records for being the fastest electric vehicle for over 1,000 kilometers on a single charge of battery,” he said.

Jetson-Based Racers for Learning

The UNSW team created nearly 60 design iterations to improve on the aerodynamics of the vehicle. They used computational fluid dynamics modeling and ran simulations to analyze each version.

“We didn’t ever put the car through a physical wind tunnel,” said Hopkins.

The technical team has been working on a model to determine what speed the vehicle should be driven at for maximum energy conservation. “They’re working on taking in as many parameters as you can, given it’s really hard to get good driving data,” said Josh Bramley, technology manager at Sunswift Racing.

Sunswift 7 is running on the Robot Operating System (ROS) suite of software and relies on its NVIDIA Jetson module to process all the input from the sensors for analytics, which can be monitored by the remote pit crew back on campus at UNSW.

Jetson is used for all the control systems on the car, so everything from the accelerator pedal, wheel sensors, solar current sensors and more are processed on it for data to analyze for ways AI might help, said Bramley. The next version of the vehicle is expected to pack more AI, he added.

“A lot of the AI and computer vision will be coming for Sunswift 8 in the next solar challenge,” said Bramley.

More than 100 students are getting course credit for the Sunswift Racing team work, and many are interested in pursuing careers in electric vehicles, said Hopkins.

Past World Solar Challenge contestants have gone on to work at Tesla, SpaceX and Zipline.

Talk about a bright future.

Learn more about the NVIDIA Jetson platform for edge AI and robotics.

Read More