Harvesting AI: Startup’s Weed Recognition for Herbicides Grows Yield for Farmers

When French classmates Guillaume Jourdain, Hugo Serrat and Jules Beguerie were looking at applying AI to agriculture in 2014 to form a startup, it was hardly a sure bet.

It was early days for such AI applications, and people said it couldn’t be done. But farmers they spoke with wanted it.

So they rigged together a crude demo to show that a GeForce GPU could run a weed-identification network with a camera. And next thing you know, they had their first customer-investor.

In 2016, the former dorm-mates at École Nationale Supérieure d’Arts et Métiers, in Paris, founded Bilberry. The company today develops weed recognition powered by the NVIDIA Jetson edge AI platform for precision application of herbicides at corn and wheat farms, offering as much as a 92 percent reduction in herbicide usage.

Driven by advances in AI and pressures on farmers to reduce their use of herbicides, weed recognition is starting to see its day in the sun. A bumper crop of AI agriculture companies — FarmWise, SeeTree, Smart Ag and John Deere-owned Blue River — is plowing this field.

Farm Tech 2.0

Early agriculture tech was just scratching the surface of what is possible. Using infrared sensing, it focused on the “green on brown” problem: distinguishing plants from bare dirt, so herbicides were applied uniformly to crops and weeds alike, blasting all plants, said Serrat, the company’s CTO.

Today, the sustainability race is on to treat “green on green,” or just the weeds near the crop, said Serrat.

“Making the distinction between weeds and crops and acting on it in real time — this is what everyone is fighting for — that’s the actual holy grail,” he said. “To achieve this requires split-second inference in the field with NVIDIA GPUs running computer vision.”

Losses in corn yields due to ineffective treatment of weeds can run roughly 15 percent to 20 percent, according to Bilberry.

The startup’s customers for smart sprayers include agriculture equipment companies Agrifac, Goldacres, Dammann and Berthoud.

Cutting Back Chemicals

Bilberry deploys its NVIDIA Jetson-powered weed recognition on tractor booms that can span the width of a U.S. football field — about 160 feet. The system runs 16 cameras on 16 Jetson TX2 modules and can analyze weeds at 17 frames per second, triggering split-second herbicide squirts while traveling 15 miles per hour.
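For a sense of the timing budget those numbers imply, here is a quick back-of-the-envelope calculation (ours, not a Bilberry spec): each frame corresponds to roughly 1.3 feet of travel, leaving about 59 milliseconds to capture, infer and spray.

```python
# Back-of-the-envelope timing budget from the figures above:
# 15 mph ground speed, 17 camera frames per second.
MPH, FPS = 15, 17

feet_per_second = MPH * 5280 / 3600     # 15 mph = 22 ft/s
feet_per_frame = feet_per_second / FPS  # ~1.3 ft of travel per analyzed frame
ms_per_frame = 1000 / FPS               # ~59 ms to capture, infer and spray

print(f"{feet_per_frame:.2f} ft/frame, {ms_per_frame:.0f} ms/frame")
```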

To achieve this blazing-fast inference performance for rapid recognition of weeds, Bilberry exploited the NVIDIA JetPack SDK for TensorRT optimizations of its algorithms. “We push it to the limits,” said Serrat.

Bilberry tapped into what’s known as INT8 weight quantization, which makes deep learning models more efficient to run — particularly helpful on compact embedded systems where memory and power constraints rule. Quantization replaces floating-point weights with 8-bit integers, and moving to integer math reduces memory use, compute load and application latency.
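In outline, symmetric quantization maps each weight tensor onto 8-bit integers via a single scale factor. The sketch below is a generic NumPy illustration of the principle, not Bilberry’s code or TensorRT’s actual implementation (which adds calibration data and per-channel scales):

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantization: map floats onto [-127, 127]
    with a single scale factor (a simplified view of the technique)."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the 8-bit integers."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.003, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)  # close to w; tiny values round toward zero
```

The integer tensor is a quarter the size of 32-bit floats, which is where the memory and latency savings come from.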

Bilberry is a member of NVIDIA Inception, a virtual accelerator program that helps startups in AI and data science get to market faster.

Winners: Environment, Yields

The startup’s smart sprayers can now dramatically reduce herbicide usage by pinpointing treatments. That can make an enormous difference on the runoff of chemicals into the groundwater, the company says. It can also improve plant yields by reducing the friendly fire on crops.

“You need to apply the right amount of herbicide to weeds — if you apply too little, the weed will keep growing and creating new seeds. Bilberry can do this at a rate of 242 acres per hour with our biggest unit,” said Serrat.

The focus on reducing agricultural chemicals comes as Europe tightens carbon caps affecting farmers and as consumers embrace organic foods. U.S. organic produce sales increased 14 percent from a year earlier to $8.5 billion in 2020, according to data from Nielsen.

Potato-Sorting Problem

Bilberry recently launched a potato-sorting application in partnership with Downs. Potatoes are traditionally sorted by hand as they move slowly across a conveyor belt. But food processors struggle to find the labor, and the monotonous work is hard to stay focused on for hours, causing errors.

“It’s really boring — doing it all day, you become crazy,” said Serrat. “And it’s seasonal, so when they need someone, it’s now, and so they’re always having problems getting enough labor.”

This makes it a perfect task for AI. The startup trained its potato-sorting network to spot bad potatoes, green potatoes, cut potatoes, rocks and dirt clods among the good spuds. Applying NVIDIA Jetson Xavier to the vision task, the platform can signal a door at the end of the conveyor belt to let only good potatoes pass.

“This is the part I love, to build software that handles something moving and has a real impact,” he said.

The post Harvesting AI: Startup’s Weed Recognition for Herbicides Grows Yield for Farmers appeared first on The Official NVIDIA Blog.


We Won’t See You There: Why Our Virtual GTC’s Bigger Than Ever

Call it an intellectual Star Wars bar. You could run into just about anything at GTC.

Princeton’s William Tang would speak about using deep learning to unleash fusion energy, UC Berkeley’s Gerry Zhang would talk about hunting for alien signals, Airbus A3’s Arne Stoschek would describe flying autonomous pods.

Want to catch it all? Run. NVIDIA’s GPU Technology Conference has long been almost too much to take in — even if you had fresh sneakers and an employer willing to give you a few days.

But a strange thing happened when this galaxy of astronomers and business leaders and artists and game designers and roboticists went virtual, and free. More people showed up. Suddenly this galaxy of content and connections is anything but far away.

100,000+ Attendees

GTC, which kicks off April 12 with NVIDIA CEO Jensen Huang’s keynote, is a technology conference like no other because it’s not just about technology. It’s about putting technology to work to accelerate what you do, (just about) whatever you do.

We’re expecting more than 100,000 attendees to log into our latest virtual event. We’ve lined up more than 1,500 sessions, and more than 2,200 speakers. That’s more than 1,100 hours of content from 11 industries and in 13 broad topic areas.

There’s no way we could have done this if it wasn’t virtual. And now that it’s entirely virtual — right down to the “Dinner with Strangers” networking event — you can consume as much as you want. No sneakers required.

For Business Leaders

Our weeklong event kicks off with a keynote on April 12 at 8:30 a.m. PT from NVIDIA founder and CEO Jensen Huang. It’ll be packed with demos and news.

Following the keynote, you’ll hear from execs at top companies, including Girish Bablani, corporate vice president for Microsoft Azure; Rene Haas, president of Arm’s IP Products Group; Daphne Koller, founder and CEO of Insitro and co-founder of Coursera; Epic Games CTO Kim Libreri; and Hildegard Wortmann, member of the board of management at Audi AG.

They’ll join leaders from Adobe, Amazon, Facebook, GE Renewable Energy, Google, Microsoft, and Salesforce, among many others.

For Developers and Those Early in Their Careers

If you’re just getting started with your career, our NVIDIA Deep Learning Institute will offer nine instructor-led workshops on a wide range of advanced software development topics in AI, accelerated computing and data science.

We also have a track of 101/Getting Started talks from our always popular “Deep Learning Demystified” series. These sessions can help anyone get oriented on the fundamentals of accelerated data analytics, high-level use cases and problem-solving methods — and how deep learning is transforming every industry.

Sessions will be offered live, online, in many time zones and in English, Chinese, Japanese and Korean. Participants can earn an NVIDIA DLI certificate to demonstrate subject-matter competency.

We’re also working with minority-serving institutions and organizations to offer their communities free seats for daylong hands-on certification classes. GTC is a forum for all communities to engage with the leading edge of AI and other groundbreaking technologies.

For Technologists

If you’re a technologist, you’ll be able to meet the minds that have created the technologies that have defined our era.

GTC will host three Turing Award winners — Yoshua Bengio, Geoffrey Hinton, Yann LeCun — whose work in deep learning has upended the technological landscape of the 21st century.

GTC will also host nine Gordon Bell winners, people who have brought the power of accelerated computing to bear on the most significant scientific challenges of our time.

Among them are Rommie Amaro, of UC San Diego; Lillian Chong of the University of Pittsburgh; computational biologist Arvind Ramanathan of Argonne National Lab; and James Phillips, a senior research programmer at the University of Illinois.

For Creators and Designers

If you’re an artist, designer or game developer, you know accelerated computing has long been key to creative industries of all kinds — from architecture to gaming to moviemaking.

Now, with AI, accelerated computing is being woven into the latest art. With our AI Art Gallery, 16 artists will showcase creations developed with AI.

You’ll also have multiple opportunities to participate. Highlights include a live, music-making workshop with the team from Paris-based AIVA and beatboxing sessions with Japanese composer Nao Tokui.

For Entrepreneurs and Investors

If you’re looking to build a new business — or fund one — you’ll find content by the fistful. Start by loading up your calendar with our four AI Day for VC sessions on April 14.

Then browse sessions spotlighting startups in industries as diverse as healthcare, agriculture, and media and entertainment. Sessions will also touch on regions around the world, including Korean startups driving the self-driving car revolution, Taiwanese healthcare startups and Indian AI startups.

For Networking

While this conference may be virtual, GTC still offers plenty of networking. To connect attendees and speakers from a wide array of backgrounds, we’re continuing our longstanding “Dinner with Strangers” tradition. Attendees will have the opportunity to sit down, over Zoom, with others from their industry.

NVIDIA employee resource communities will host events including Growth for Women in Tech, the Queer in AI Mixer, the Black in AI Mixer and the LatinX in AI Mixer. We’re also launching “AI: Making (X) Better,” a series of talks featuring NVIDIA leaders from underrepresented communities who will discuss their path to AI.

Enough About Us, Make GTC About You

GTC offers an opportunity to engage with groundbreaking technologies like AI-accelerated data centers, deep learning for scientific discoveries, healthcare breakthroughs, next-generation collaboration and more.

Our advice? Register now; it’s free. Block off time on your calendar for the keynote April 12. Then hit the search bar on the conference page and look for content related to what you do — and what interests you.

Suddenly, the conference that’s all about accelerating everything is all about accelerating you.


Parsing Petabytes, SpaceML Taps Satellite Images to Help Model Wildfire Risks

When freak lightning ignited massive wildfires across Northern California last year, it also sparked efforts from data scientists to improve predictions for blazes.

One effort came from SpaceML, an initiative of the Frontier Development Lab, which is an AI research lab for NASA in partnership with the SETI Institute. Dedicated to open-source research, the SpaceML developer community is creating image recognition models to help advance the study of natural disaster risks, including wildfires.

SpaceML uses accelerated computing on petabytes of data for the study of Earth and space sciences, with the goal of advancing projects for NASA researchers. It brings together data scientists and volunteer citizen scientists on projects that tap into NASA’s Earth Observing System Data and Information System. That system holds images of Earth’s entire surface — 197 million square miles — recorded daily over 20 years, amounting to 40 petabytes of unlabeled data.

“We are lucky to be living in an age where such an unprecedented amount of data is available. It’s like a gold mine, and all we need to build are the shovels to tap its full potential,” said Anirudh Koul, machine learning lead and mentor at SpaceML.

Stoked to Make a Difference

Koul, whose day job is a data scientist at Pinterest, said the California wildfires damaged areas near his home last fall. The San Jose resident and avid hiker said they scorched some of his favorite hiking spots at nearby Mount Hamilton. His first impulse was to join as a volunteer firefighter, but instead he realized his biggest contribution could be through lending his data science chops.

Koul enjoys work that helps others. Before volunteering at SpaceML, he led AI and research efforts at startup Aira, which uses augmented reality glasses to describe for the blind what’s in front of them, pairing image identification with natural language processing.

Aira, a member of the NVIDIA Inception accelerator program for startups in AI and data science, was acquired last year.

Inclusive Interdisciplinary Research 

The work at SpaceML combines volunteers without backgrounds in AI with tech industry professionals as mentors on projects. Their goal is to build image classifiers from satellite imagery of Earth to spot signs of natural disasters.

Groups take on three-week projects that can examine everything from wildfires and hurricanes to floods and oil spills. They meet monthly for evaluations with NASA scientists who have domain expertise in the relevant fields.

Contributors to SpaceML range from high school students to graduate students and beyond. The work has included participants from Nigeria, Mexico, Korea, Germany and Singapore.

SpaceML’s team members for this project include Rudy Venguswamy, Tarun Narayanan, Ajay Krishnan and Jeanessa Patterson. The mentors are Koul, Meher Kasam and Siddha Ganju, a data scientist at NVIDIA.

Assembling a SpaceML Toolkit

SpaceML provides a collection of machine learning tools. Groups use it for tasks such as self-supervised learning using SimCLR, multi-resolution image search and data labeling. Ease of use is key to the suite of tools.

Among their pipeline of model-building tools, SpaceML contributors rely on NVIDIA DALI for fast preprocessing of data. DALI turns raw, unstructured imagery, which can’t be fed directly into convolutional neural networks, into inputs ready for training classifiers.

“Using DALI we were able to do this relatively quickly,” said Venguswamy.
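The kind of work DALI offloads to the GPU can be pictured with a CPU-side sketch: resize and normalize images, producing CHW tensors ready for a CNN. This NumPy version is purely illustrative; DALI expresses equivalent steps as GPU operators, and none of this is SpaceML’s actual code:

```python
import numpy as np

def preprocess(batch_u8, size=(224, 224)):
    """CPU sketch of a resize-and-normalize pipeline.
    DALI runs equivalent operators on the GPU to avoid a CPU bottleneck."""
    out = []
    for img in batch_u8:                      # each img: HxWx3 uint8
        h, w, _ = img.shape
        # Nearest-neighbor resize by index sampling.
        ys = np.linspace(0, h - 1, size[0]).astype(int)
        xs = np.linspace(0, w - 1, size[1]).astype(int)
        resized = img[ys][:, xs]
        # Scale to [0, 1], then normalize each channel to zero mean, unit variance.
        x = resized.astype(np.float32) / 255.0
        x = (x - x.mean(axis=(0, 1))) / (x.std(axis=(0, 1)) + 1e-8)
        out.append(x.transpose(2, 0, 1))      # HWC -> CHW for the CNN
    return np.stack(out)

batch = [np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)]
tensor = preprocess(batch)
```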

Findings from SpaceML were published through the Committee on Space Research (COSPAR) so that other researchers can replicate the work.

Classifiers for Big Data

The group developed Curator to train classifiers with a human in the loop, requiring fewer labeled examples because of its self-supervised learning. Curator’s interface is like Tinder, explains Koul, so that novices can swipe left on rejected examples of images for their classifiers or swipe right for those that will be used in the training pipeline.

The process allows them to quickly collect a small set of labeled images and use that against the GIBS Worldview set of the satellite images to find every image in the world that’s a match, creating a massive dataset for further scientific research.
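At its core, that matching step is a nearest-neighbor search over image embeddings. A minimal sketch, using synthetic vectors in place of real satellite-image embeddings:

```python
import numpy as np

def nearest_neighbors(query, index, k=3):
    """Return indices of the k rows of `index` most cosine-similar to `query`."""
    q = query / np.linalg.norm(query)
    rows = index / np.linalg.norm(index, axis=1, keepdims=True)
    return np.argsort(-(rows @ q))[:k]

rng = np.random.default_rng(0)
index = rng.normal(size=(1000, 128))             # stand-in for image embeddings
query = index[42] + 0.01 * rng.normal(size=128)  # near-duplicate of item 42
hits = nearest_neighbors(query, index)
```

With a good self-supervised embedding, visually similar satellite tiles land close together, so a handful of swiped-right examples can retrieve matches across the whole archive.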

“The idea of this entire pipeline was that we can train a self-supervised learning model against the entire Earth, which is a lot of data,” said Venguswamy.

The CNNs are run on instances of NVIDIA GPUs in the cloud.

To learn more about SpaceML, check out these speaker sessions at GTC 2021:

Space ML: Distributed Open-Source Research with Citizen-Scientists for Advancing Space Technology for NASA (GTC registration required to view)

Curator: A No-Code, Self-Supervised Learning and Active Labeling Tool to Create Labeled Image Datasets from Petabyte-Scale Imagery (GTC registration required to view)

The GTC keynote can be viewed on April 12 at 8:30 a.m. Pacific time and will be available for replay.

Photo credit: Emil Jarfelt, Unsplash


GFN Thursday: All My Friends Know the Outriders

Spring. Rejuvenation. Everything’s blooming. And GeForce NOW adds even more games.

Nature is amazing.

For today’s GFN Thursday, we’re taking a look at all the exciting games coming to GeForce NOW in April.

Starting with today’s launches:


OUTRIDERS (day-and-date release on Epic Games Store and Steam)

As mankind bleeds out in the trenches of Enoch, you’ll create your own Outrider and embark on a journey across the hostile planet. Check out this highly anticipated 1-3 player co-op RPG shooter set in an original, dark and desperate sci-fi universe.

Members can also look for the following titles later today:

  • Narita Boy (day-and-date release on Steam, March 30)
  • Tales of the Neon Sea (Free on Epic Games Store, April 1-8)
  • A-Train PC Classic (Steam)
  • Endzone – A World Apart (Steam)
  • Forts (Steam)
  • Might & Magic X Legacy (Ubisoft Connect)
  • Mr. Prepper (Steam)
  • Nine Parchments (Steam)
  • Re:ZERO -Starting Life in Another World- The Prophecy of the Throne (Steam)
  • Rhythm Doctor (Steam)
  • Shadowrun: Hong Kong – Extended Edition (Steam)
  • Styx: Master of Shadows (Steam)

April Anticipation

There’s a wide and exciting variety of games coming soon to GeForce NOW:

The Legend of Heroes: Trails of Cold Steel IV (Steam)

The long-awaited finale to the epic engulfing a continent comes to a head in the final chapter of the Trails of Cold Steel saga!

R-Type Final 2 (Steam)

The legendary side-scroller is back with beautiful 3D graphics, exhilarating shoot-’em-up gameplay, and a multitude of stages, ships and weapons that will allow you to conduct a symphony of destruction upon your foes.

Turnip Boy Commits Tax Evasion (Steam)

Play as an adorable yet trouble-making turnip. Avoid paying taxes, solve plantastic puzzles, harvest crops and battle massive beasts all in a journey to tear down a corrupt vegetable government!

And that’s not all — check out even more games you’ll be able to stream from the cloud in April.

Oops, We Did It Again

In March, we added support for the GOG.COM versions of The Witcher series, including Thronebreaker: The Witcher Tales.

In March we said 21 titles were coming to GeForce NOW.

Turns out we added 14 additional games for a grand total of 35. That’s more than one a day.

  • Do Not Feed the Monkeys (Steam)
  • Evoland Legendary Edition (Steam)
  • GoNNER (Steam)
  • Iron Conflict (Steam)
  • Paradise Lost (Steam)
  • Railroad Corporation (Steam)
  • Snooker 19 (Steam)
  • System Shock: Enhanced Edition (Steam)
  • Stronghold: Warlords (Steam)
  • Sword and Fairy 7 Trial (Steam)
  • The Witcher 2: Assassins of Kings Enhanced Edition (GOG.COM)
  • The Witcher 3: Wild Hunt – Game of the Year Edition (GOG.COM)
  • The Witcher Adventure Game (GOG.COM)
  • Thronebreaker: The Witcher Tales (GOG.COM)
  • Wanba Warriors (Steam)

It should be an exciting month, members. What are you going to play? Let us know on Twitter or in the comments below.


All for the ‘Gram: The Next Big Thing on Social Media Could Be a Smarter Camera

Thanks to smartphones, you can now point, click and share your way to superstardom.

Smartphones have made creating — and consuming — digital photos and video on social media a pastime for billions. But the content those phones create can’t compare to what a good camera produces.

British entrepreneur Vishal Kumar’s startup Photogram wants to combine the benefits of mirrorless cameras — replete with interchangeable lenses, big sensors, and often complex, fiddly controls — with the smarts and connectivity of your smartphone.

The Alice Camera, due for release in October for around $760 to early backers on crowdfunding platform Indiegogo, is, first of all, a compact, mirrorless camera.

The Alice camera focuses on what cameras do well…

But its machined aluminum body is sleeker than other mirrorless cameras. There’s no onboard screen or viewfinder of any kind — just a shutter button, a control wheel, and a cold-shoe adapter.

Instead, photographers mount their smartphone to the Alice camera. Alice focuses on what smartphones can’t do: it houses a big, light-soaking sensor and accepts a wide array of interchangeable lenses.

Alice links to your smartphone via a 5GHz Wi-Fi connection — so your smartphone can do what traditional cameras can’t. An app on your phone provides not just an easily updatable software interface on the smartphone’s big, bright screen, but also the connectivity to easily share images and stream video.

The secret ingredient: an AI built using NVIDIA GPU accelerated deep learning to help photographers wring the most out of Alice’s hardware.

A Star Is Born

Kumar, whose startup is part of NVIDIA Inception, an acceleration program that offers go-to-market support, expertise and technology for AI, data science and HPC startups, sees the opportunity for this device literally staring everyone in the face.

There are more than 1 billion active Instagram users, more than 1 billion active TikTok users, and more than 2.3 billion YouTube users. Social media influencers who can deliver great content to all these users become instantaneous global superstars, Kumar explains.

Consider Charli D’Amelio, who has more than 115 million followers, or Addison Rae, with more than 86 million, or YouTube star Mr. Beast, with more than 71 million followers.

Content creators like these may rely, in large part, on smartphone users to provide an audience. Most, however, left their smartphones behind long ago, adopting more sophisticated gear to create their content.

Firsthand Frustration

…and relies on a user’s smartphone, mounted on the back, to do what smartphones do well.

Kumar, a self-described cultural data scientist, learned this firsthand working as a data scientist at Sotheby’s. Great photography was vital to sparking worldwide interest in the auction house’s offerings on social media.

“I was thinking a lot about how data science and machine learning and artificial intelligence could be applied to create video and imagery,” Kumar says. “I was using a camera all the time to create video content and becoming increasingly frustrated with operating them.”

A Camera for Creators

And serious content creators definitely need a better camera than the smartphone they already carry around with them.

In part, that’s because you can only capture so much light in the relatively compact sensors crammed into today’s smartphones, Kumar explains. Bigger sensors soak up more light, so they can capture a high-quality image even in very dim lighting conditions, among other benefits.

So Alice is built around a Sony IMX294 10.7 megapixel 4/3 sensor optimized for high-quality and full-width 4K video. The sensor is eight times bigger than that of a typical smartphone.

In front of that big sensor is a Micro Four Third lens mount. The compact interchangeable lens system gives users access to more than 100 lens options from Olympus, Panasonic, and specialty lens makers such as Sigma, Tamron, and Tokina.

Users will be able to choose from 16mm-equivalent fish-eye lenses for a super-wide angle of view to 800mm-equivalent telephoto zoom lenses that capture clear, undistorted pictures of faraway objects, with plenty of options in between.

Like more traditional cameras, Alice can use a wide variety of lenses.

A Classic Deep Learning Problem

All of this is why professional photographers continue to keep big, expensive cameras in their toolkit. But the masses of content creators may never get the most out of dedicated cameras, Kumar explains.

Making more expertise more accessible to more people is a classic deep learning problem. And the technology’s roots in computer vision and image recognition make training an AI to operate a camera a natural fit.

To build Alice’s AI, Photogram CTO Liam Donovan trained an NVIDIA GPU accelerated convolutional neural network using millions of out-of-focus and in-focus images, teaching an AI to distinguish between good and bad photos.
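For intuition, the signal such a network learns has a classical analog: the variance of a Laplacian filter response, which is large for sharp, detailed images and near zero for flat, defocused ones. The sketch below shows that baseline measure; it is not Photogram’s CNN:

```python
import numpy as np

def sharpness(gray):
    """Variance of a 4-neighbor Laplacian response: a classic focus measure.
    Sharp, detailed images score high; flat, defocused ones score near zero."""
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

rng = np.random.default_rng(0)
in_focus = rng.random((64, 64))     # high-frequency detail stands in for a sharp image
defocused = np.full((64, 64), 0.5)  # a flat patch stands in for heavy blur
```

A trained classifier generalizes far beyond this single statistic, but the statistic shows what “out of focus” looks like to an algorithm.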

Optimized for What You Do

The AI is a crucial element in the end-to-end deep learning image processing pipeline Photogram’s team has built.

The result is an AI that can control and improve focusing, change exposure and automatically adjust white balance, and even perform automatic image stabilization.


Users will eventually make Alice better at whatever photography they do, Kumar explains.

“Let’s say you’re a wedding photographer, or you like to shoot cats or clothes,” Kumar says. “We want people to be able to optimize our models and retrain them so their Alice camera can be more optimized for the photography they do.”

Smartphone Synergy

Revealed last September and positioned by Kumar as the “AI camera for creators,” the project had raised $200,000 from more than 250 backers as of February — 7x Kumar’s original goal.

To be sure, like all early-stage crowdfunded projects, Alice is still very much a work in progress. The plan, for now, is to offer Alice to early funders on Indiegogo first and to the general public later.

However Alice is received at first, with billions around the world creating and sharing images every day, sooner or later the idea is bound to click.

Image credits: Photogram


Now Hear This: Startup Gives Businesses a New Voice

Got a conflict with your 2 p.m. appointment? Just spin up a quick assistant that takes good notes and, when your boss asks about you, even identifies itself and explains why you aren’t there.

Nice fantasy? No, it’s one of many use cases a team of some 50 ninja programmers, AI experts and 20 beta testers is exploring with Dasha. And they’re looking for a few good developers to join a beta program for the product that shows what’s possible with conversational AI on any device with a mic and a speaker.

“Conversational AI is going to be [seen as] the biggest paradigm shift of the last 40 years,” the chief executive and co-founder of Dasha, Vlad Chernyshov, wrote in a New Year’s tweet.

Using the startup’s software, its partners are already creating cool prototypes that could help make that prediction come true.

For example, a bank is testing Dasha to create a self-service support line. And the developer of a popular console game is using it to create an in-game assistant that players can consult via a smartwatch on their character’s wrist.

Custom Conversations Created Quickly

Dasha’s development tool lets an average IT developer use familiar library calls to design custom dialogs for any business process. They can tap into the startup’s unique capabilities in speech recognition, synthesis and natural-language processing running on NVIDIA GPUs in the cloud.

“We built all the core technology in house because today’s alternatives have too high a latency, the voice does not sound natural or the level of controls is not flexible enough for what customers want to do,” Chernyshov said.

NVIDIA GPUs speed up the AI engine in Dasha’s conversational AI platform.

The startup prides itself on its software that both creates and understands speech with natural inflections of emotion, breathing—even the “ums” and “ahs” that pepper real conversations. That level of fluency is helping early users get better responses from programs like Dasha Delight that automates post-sales satisfaction surveys.

Delighting Customers with Conversational AI

A bank that caters to small businesses gave Delight to its two-person team handling customer satisfaction surveys. With automated surveys, they covered more customers and even developed a process to respond to complaints, sometimes with problem fixes in less than an hour.

Separately, the startup developed a smartphone app called Dasha Assistant. It uses conversational AI to screen out unwanted sales calls but put through others like the pizza man confirming an order.

Last year, the company even designed an app to automate contact tracing for COVID-19.

An Ambitious Mission in AI

While one team of developers pioneers such new use cases, a separate group of researchers at Dasha pushes the envelope in realistic speech synthesis.

“We have a mission of going after artificial general intelligence, the ability for computers to understand like humans do, which we believe comes through developing systems that speak like humans do because speech is so closely tied to our intelligence,” said Chernyshov.

Below: Chernyshov demos a customer service experience with Dasha’s conversational AI.

He’s had a passion for dreaming big ideas and coding them up since 2007. That’s when he built one of the first instant messaging apps for Android at his first startup while pursuing his computer science degree in the balmy southern Siberian city of Novosibirsk, in Russia.

With no venture capital community nearby, the startup died, but that didn’t stop a flow of ideas and prototypes.

By 2017, Chernyshov had learned how to harness AI and wrote a custom program for a construction company. It used conversational AI to automate the work of recruiting a national network of hundreds of dealers.

“We realized the main thing preventing mainstream adoption of conversational AI was that most automated systems were really stupid and nobody was focused on making them comfortable and natural to talk with,” he said.

A 7x Speed Up With GPUs

To get innovations to market quickly, Dasha runs all AI training and inference on NVIDIA A100 Tensor Core GPUs and earlier-generation GPUs.

The A100 trains Dasha’s latest models for speech synthesis in a single day, 7x faster than previous-generation GPUs. In one of its experiments, Dasha trained a Transformer model 1.85x faster using four A100s than with eight V100 GPUs.
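Assuming near-linear multi-GPU scaling, those figures imply each A100 delivered roughly 3.7x the per-GPU throughput of a V100 for that workload (an inference from the numbers above, not a separate benchmark Dasha reported):

```python
# Per-GPU throughput implied by the experiment described above:
# 4 A100s finished training 1.85x faster than 8 V100s.
a100s, v100s, wall_clock_speedup = 4, 8, 1.85

# Relative work rate per GPU, assuming near-linear scaling on both systems.
per_gpu_ratio = wall_clock_speedup * v100s / a100s
print(f"Each A100 delivered ~{per_gpu_ratio:.1f}x a V100's throughput here")
```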

“We would never get here without NVIDIA. Its GPUs are an industry standard, and we’ve been using them for years on AI workflows,” he said.

NVIDIA software also gives Dasha traction. TensorRT eases the job of running AI in production: the NVIDIA code squeezes the super-sized models used in conversational AI so they deliver inference results faster, with less memory and without losing accuracy.

Mellotron, a model for speech synthesis developed by NVIDIA, gave Dasha a head start creating its custom neural networks for fluent systems.

“We’re always looking for better model architecture to do faster inference and speech synthesis, and Mellotron is superior to other alternatives,” he said.

Now, Chernyshov is looking for a few ninja programmers in a handful of industries he wants represented in the beta program for Dasha. “We want to make sure every sector gets a voice,” he quipped.


Art and Music in Light of AI

In the sea of virtual exhibitions that have popped up over the last year, the NVIDIA AI Art Gallery offers a fresh combination of incredible visual art, musical experiences and poetry, highlighting the narrative of an emerging art form based on AI technology.

The online exhibit — part of NVIDIA’s GTC event — will feature 13 standout artists from around the world who are pioneering the use of AI within their respective fields in music, visual arts and poetry.

The exhibit complements what has become the world’s premier AI conference. GTC, running April 12-16, brings together researchers from industry and academia, startups and Fortune 500 companies.

A Uniquely Immersive Experience

Unlike other virtual galleries that only depict the artist’s final piece, the AI Art Gallery is a uniquely immersive experience. Visitors can explore each artist’s creative process, the technologies used to bring their creations to life, and the finished works that shine new light on AI’s potential.

In addition to artists from last year’s AI Art Gallery, including Daniel Ambrosi, Helena Sarin, Pindar Van Arman, Refik Anadol, Scott Eaton, and Sofia Crespo and Entangled Others, next month’s exhibition features these prominent artists:

  • AIVA (music) – Pierre Barreau and Denis Shtefan, co-founders of Paris-based AI music startup AIVA, combine their computer science and musical composition training to generate personalized soundtracks using AI.
  • Allison Parrish (poetry) – Parrish is a computer programmer, poet and educator whose “poetry bots” sit at the intersection of creativity, language and AI.
  • Dadabots + Keyon Christ (music) – Music and hacker duo CJ Carr and Zack Zuckowski utilize deep learning algorithms based on large datasets of recorded music to generate expressions of sound that have never existed, yet imply the sounds of soul music. Artist and producer Keyon Christ, formerly known as Mitus, brings his instantly recognizable sound to the tracks generated by Dadabots’ AI.
  • 64/1 + Harshit Agrawal (visual art) – Brothers Karthik Kalyanaraman and Raghava KK joined with Bangalore-based Harshit Agrawal to create artwork that combines their backgrounds in social studies, art, and emerging technologies. Their project, Strange Genders, uses AI to understand and showcase how the people of India represent gender visually.
  • Holly Herndon (music) – Herndon and Matt Dryhurst used deep learning to develop a voice synthesizer that combines a large corpus of voices, an entirely new approach in composition.
  • Nao Tokui + Qosmo (music) – In their project “Neural Beatbox,” Japanese artist Nao Tokui and his Qosmo studio use AI’s somewhat unpredictable behaviors to invite visitors to think and create outside the box.
  • Stephanie Dinkins (visual art) – Best described as a transmedia artist, Dinkins creates experiences that spark dialogue about race, gender, aging, and future histories.

Attendees of GTC, which has free registration, will have the opportunity to interact with the artists in panel discussions and workshops. Session highlights include:

Panels and Talks

  • AI Representing Natural Forms – April 14, 11 a.m. PT
    • Join a discussion with artists Sofia Crespo, Feileacan McCormick, Anna Ridler and Daniel Ambrosi to explore how they use AI in their creative process of generating interpretations of natural forms. NVIDIA Technical Specialist Chris Hebert moderates.
  • Art in Light of AI – April 13, 8 a.m. PT
    • In a discussion led by media artist Refik Anadol, a panel of artists from around the globe, including Harshit Agrawal, Scott Eaton and Helena Sarin, will compare how they combine their fine art backgrounds with their futuristic art practices.
  • DADABOTS: Artisanal Machine Funk, Generative Black Metal – April 12, 10 a.m. PT
    • Moderated by NVIDIA Senior Engineer Omer Shapira, this panel will feature creators and users of electronic instruments that rely on deep learning for their performance — whether to create entirely new sounds and music from existing recordings or to give the music playing a human form.
  • Using AI to Shape the Language of a Generation – April 12, 12 p.m. PT
    • Join a discussion of how language in the age of AI takes on new forms and tells new stories with Allison Parrish, Stephanie Dinkins and Pindar Van Arman, moderated by NVIDIA’s Heather Schoell.

Workshops

  • Beatbox with Nao Tokui – April 15, 5 p.m. PT
    • Nao Tokui, an artist and researcher based in Japan, will lead a beatbox-making workshop using his web-based app, Neural Beatbox.
  • Music-Making with AIVA – April 14, 9 a.m. PT
    • Join the team from AIVA, who’ll lead a music-making workshop using their web-based music app.

Register to join us at GTC April 12-16, and enjoy the AI Art Gallery and related sessions.

The post Art and Music in Light of AI appeared first on The Official NVIDIA Blog.


Drum Roll, Please: AI Startup Sunhouse Founder Tlacael Esparza Finds His Rhythm

Drawing on his trifecta of degrees in math, music and music technology, Tlacael Esparza, co-founder and CTO of Sunhouse, is revolutionizing electronic drumming.

Esparza has created Sensory Percussion, a combination of hardware and software that uses sensors and AI to allow a single drum to produce a complex range of sounds depending on where and how the musician hits it.

In the latest installment of the NVIDIA AI Podcast, Esparza spoke with host Noah Kravitz about the tech behind the tool, and what inspired him to create Sunhouse. Esparza has been doing drumstick tricks of his own for many years — prior to founding Sunhouse, he toured with a variety of bands and recorded drums for many albums.

Esparza’s musical skill and programming knowledge formed the basis for Sensory Percussion. Partnering with his brother, Tenoch, and with support from a New York University startup accelerator, Sunhouse was born in 2014.

Since then, it’s become successful with live performers. Esparza is especially proud of its popularity in the New York jazz community and among drumming legends like Marcus Gilmore and Wilco’s Glenn Kotche.

Esparza and Sunhouse customers will be marching to the beat of his drum far into the future — he hints at more musical tech to come.

Key Points From This Episode:

  • Esparza was exposed to the idea of applying deep learning techniques to audio processing while earning his master’s at NYU. He studied under Juan Bello, who is responsible for much of the foundational work on music information retrieval techniques, and audited courses from AI pioneer Yann LeCun.
  • One of Esparza’s goals with Sensory Percussion was to bridge the gap between engineers and musicians. He points out that software is often extremely powerful but complex or easy to use but limited. Sunhouse technology is designed to be an accessible intermediary.

Tweetables:

“It’s about capturing that information and allowing you to use it to translate all that stuff into this electronic realm” — Tlacael Esparza [6:20]

“[Sensory Percussion] ended up actually getting utilized by musicians as more of a full-on composition tool to write and perform entire pieces of music.” — Tlacael Esparza [24:10]

You Might Also Like:

Pierre Barreau Explains How Aiva Uses Deep Learning to Make Music

AI systems have been trained to take photos and transform them into the style of great artists, but now they’re learning about music. Pierre Barreau, head of Luxembourg-based startup Aiva Technologies, talks about the soaring music composed by an AI system and featured on this podcast.

How SoundHound Uses AI to Bring Voice and Music Recognition to Any Platform

SoundHound has leveraged its decade of experience in data analytics to create a voice recognition tool that companies can bake into any product. Mike Zagorsek, SoundHound’s vice president of product marketing, talks about how the company has grown into a major player in voice-driven AI.

Pod Squad: Descript Uses AI to Make Managing Podcasts Quicker, Easier

Serial entrepreneur Andrew Mason is making podcast editing easier and more collaborative with his company, Descript Podcast Studio, which uses AI, natural language processing and automatic speech synthesis.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.


Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.

The post Drum Roll, Please: AI Startup Sunhouse Founder Tlacael Esparza Finds His Rhythm appeared first on The Official NVIDIA Blog.


Shackling Jitter and Perfecting Ping, How to Reduce Latency in Cloud Gaming

Looking to improve your cloud gaming experience? First, become a master of your network.

Twitch-class champions in cloud gaming shred Wi-Fi and broadband waves. They cultivate good ping to defeat two enemies — latency and jitter.

What Is Latency? 

Latency, or lag, is the delay in getting data from the device in your hands to a computer in the cloud and back again.

What Is Jitter?

Jitter is the annoying disruption that has you yelling at your router (“You’re breaking up, again!”). It happens when pieces of your data (called packets) get sidetracked or arrive at uneven intervals.
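One simple way to put a number on jitter is the average variation between consecutive ping samples. Here’s a minimal Python sketch of that idea (the sample values are invented for illustration):

```python
def jitter_ms(rtts):
    """Mean absolute difference between consecutive round-trip times (ms)."""
    diffs = [abs(b - a) for a, b in zip(rtts, rtts[1:])]
    return sum(diffs) / len(diffs)

# Five hypothetical ping samples in milliseconds; the 40 ms spike
# is one of those sidetracked packets.
samples = [23.0, 25.0, 22.0, 40.0, 24.0]
print(jitter_ms(samples))
```

A steady connection can have a higher average ping than a shaky one yet still feel better to play on, because its jitter stays low.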

Why Are Latency and Jitter Important?

Cloud gaming works by rendering game images on a custom GFN server that may be miles away, then streaming those images back so they appear on the device in front of you.

When you fire at an enemy, your device sends data packets to those servers. The kill happens in the game running on the servers, which send back commands that display the win on your screen.

And it all happens in less than the blink of an eye. In technical terms, it’s measured in “ping.”

What Is Ping?

Ping is the time in milliseconds it takes a data packet to go to the server and back.

Anyone with the right tools and a little research can prune their ping down to less than 30 milliseconds. Novices can get pummeled with as much as 300 milliseconds of lag. It’s the difference between getting or being the game-winning kill.
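To make the round trip concrete, here’s a minimal Python sketch that times one small message there and back. It spins up a throwaway echo server on loopback so the example is self-contained; a real measurement would target the cloud gaming server, and real ping tools use ICMP rather than TCP:

```python
import socket
import threading
import time

def echo_once(server_sock):
    """Accept one connection and echo the bytes back (a stand-in game server)."""
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(16))

def measure_ping_ms(host, port):
    """Round-trip time in milliseconds for one small message."""
    start = time.perf_counter()
    with socket.create_connection((host, port)) as s:
        s.sendall(b"ping")
        s.recv(16)  # wait for the echo before stopping the clock
    return (time.perf_counter() - start) * 1000

server = socket.socket()
server.bind(("127.0.0.1", 0))  # grab any free port on loopback
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

rtt = measure_ping_ms("127.0.0.1", port)
print(f"round trip: {rtt:.2f} ms")
```

Over loopback this returns a fraction of a millisecond; across the internet to a distant server, the same round trip stretches into the tens or hundreds of milliseconds the article describes.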

Before we describe the ways to crush latency and jitter, let’s take its measure.

To Measure Latency, Test Your Ping

Tests at Speedtest.net and speedsmart.net are easy to run, but they only measure latency to a generic server that may sit near your network.

They don’t measure the time it takes to get data to and from the server you’re connecting to for your cloud gaming session. For a more accurate gauge of your ping, some cloud gaming services, such as NVIDIA GeForce NOW, sport their own built-in test for network latency. Those tests measure the ping time to and from the respective cloud gaming server.

Blazing Wi-Fi and broadband can give your ping some zing.

My Speedtest result showed a blazing 10 milliseconds, while Speedsmart measured a respectable 23ms.

Your mileage may vary, too. If your ping sticks its head much above 30ms, the first thing to do is check your ISP or network connection. Still having trouble? Try rebooting. Turn your device and your Wi-Fi network off for 10 seconds, then turn them back on and run the tests again.

How to Reduce Latency and Ping

If the lag remains, more can be done for little or no cost, and new capabilities are coming down the pike that will make things even better.

First, try to get off Wi-Fi. Simply running a standard Ethernet cable from your Wi-Fi router to your device can slash latency big time.

If you can’t do that, there are still plenty of ways to tune your Wi-Fi.

Ethernet is faster, but research from CableLabs shows most gamers use Wi-Fi.

Your ping may be stuck in heavy traffic. Turn off anything else running on your Wi-Fi network, especially streaming videos, work VPNs — hey, it’s time for play — and anyone trying to download the Smithsonian archives.

A High Five Can Avoid Interference

If rush-hour traffic is unavoidable on your home network, you can still create your own diamond lane.

Unless you have an ancient Wi-Fi access point (AP, for short — often referred to as a router) suitable for display in a museum, it should support both 2.4- and 5GHz channels.

You can claw back many milliseconds of latency if you shunt most of your devices to the 2.4GHz band and save 5GHz for cloud gaming.

Wi-Fi analyzer apps, available on the Google Play and Apple App stores, can even determine which slices of your Wi-Fi airspace are more or less crowded. A nice fat 80MHz channel in the 5GHz band without much going on nearby is an ideal runway for cloud gaming.

Quash Latency with QoS

If it’s less than a decade old, your AP probably has something called quality of service, or QoS.

QoS can give some apps higher priority than others. APs vary widely in how they implement QoS, but it’s worth checking to see if your network can be set to give priority to cloud gaming.

NVIDIA provides a list of recommended APs it has tested with GeForce NOW, as well as a support page on how to apply QoS and other techniques.

Take a Position Against Latency

If latency persists, see if you can get physically closer to your AP. Can you move to the same room?

If not, consider buying a mesh network. That’s a collection of APs you can string around your home, typically with Ethernet cables, so you have an AP in every room where you use Wi-Fi.

Some folks suggest trying different positions for your router and your device to get the sweetest reception. But others say this will only shave a millisecond or so off your lag at best, so don’t waste too much time playing Wi-Fi yoga.

Stay Tuned for Better Ping

The good news is more help is on the way. The latest version, Wi-Fi 6, borrows a connection technology from cellular networks (called OFDMA) that reduces signal interference significantly, reducing latency.

So, if you can afford it, get a Wi-Fi 6 AP, but you’ll have to buy a gaming device that supports Wi-Fi 6, too.

Next year, Wi-Fi 6E devices should be available. They’ll sport a new 6GHz Wi-Fi band where you can find fresh channels for gaming.

Coping with Internet Latency

Your broadband connection is the other part of the net you need to set up for cloud gaming. First, make sure your internet plan matches the kind of cloud gaming you want to play.

These days a basic plan tops out at about 15 Mbits/second. That’s enough if your screen can display 1280×720 pixels, aka high definition or 720p. If you want a smoother, more responsive experience, step up to 1080p (full high definition) or even 4K ultra-high definition, which requires at least 25 Mbits/s. More is always better.
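As a rough sketch, those tiers boil down to a simple lookup using the figures above (actual requirements vary by service and device):

```python
# Minimum downstream bandwidth by streaming tier, in Mbits/s,
# using the rough figures from this article.
MIN_MBPS = {"720p": 15, "1080p": 25, "4K": 25}

def plan_supports(plan_mbps, tier):
    """True if an internet plan's speed meets the tier's minimum."""
    return plan_mbps >= MIN_MBPS[tier]

print(plan_supports(15, "720p"))   # True
print(plan_supports(15, "1080p"))  # False
```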

If you’re playing on a smartphone, 5G cellular services are typically the fastest links, but in some areas a 4G LTE service may be well optimized for gaming. It’s worth checking the options with your cellular provider.

When logging into your cloud gaming service, choosing the closest server can make a world of difference.

For example, using the speedsmart.net test gave me a ping of 29ms from a server in San Francisco, 41 miles away. Choosing a server in Atlanta, more than 2,000 miles away, bloated my ping to 80ms. And forget about even trying to play on a server on another continent.

GeForce NOW members can sit back and relax on this one. The service automatically picks the fastest server for you, even if the server is a bit farther away.

A Broadband Horizon for Cloud Gaming

Internet providers want to tame the latency and jitter in their broadband networks, too.

Cable operators plan to upgrade DOCSIS 3.1, the specification behind cable broadband, to create low-latency paths for cloud gamers. A related technology, called L4S, is in the works for other kinds of internet access providers and Wi-Fi APs.

Broadband latency should shrink to a fraction of its size (from blue to red in the chart) once low-latency DOCSIS 3.1 software is available and supported.

The approach requires some work on the part of application developers and cloud gaming services. But the good news is engineers and developers across all these companies are engaged in the effort and it promises dramatic reductions in latency and jitter, so stay tuned.

Now you know the basics of navigating network latency to become a champion cloud gamer, so go hit “play.”

Follow GeForce NOW on Facebook and Twitter and stay up to date on the latest features and game launches. 

The post Shackling Jitter and Perfecting Ping, How to Reduce Latency in Cloud Gaming appeared first on The Official NVIDIA Blog.


Come Sale Away with GFN Thursday

GFN Thursday means more games for GeForce NOW members, every single week.

This week’s list includes the day-and-date release of Spacebase Startopia, but first we want to share the scoop on some fantastic sales available across our digital game store partners that members will want to take advantage of this very moment.

Discounts for All

GeForce NOW is custom-built for PC gamers, and our open platform philosophy is a hallmark of PC gaming. Gamers are used to buying their games from whichever digital store they choose, and often jump between them during big sales.

That’s why we support multiple digital game stores, as well as the games already in your libraries. Why lock you down if there are great deals to be had?

When supported games go on sale, members can purchase and instantly play from GeForce NOW’s cloud gaming servers. Plus, they know that they’re adding the real PC version to their gaming library for playing on their local machines whenever they like.

Your Base, Your Rules

Spacebase Startopia on GeForce NOW
Keeping your inhabitants happy is half the battle in Spacebase Startopia, joining GeForce NOW on March 26.

This open platform philosophy is one of the key reasons why Kalypso made its games available on GeForce NOW. The publisher, known for hit strategy sims like the Tropico series, understands that bringing its games to GeForce NOW means another easy way to welcome new gamers, and keep them playing.

“We want gamers to be able to play our games as easily as possible,” says Marco Nier, international marketing manager at Kalypso. “If they can play on their local machine? Great! If they can use GeForce NOW on their laptop, or Mac, or phone? Even better.”

Kalypso’s newest game, Spacebase Startopia, joins the GeForce NOW library when it releases tomorrow, March 26. Developed by Realmforge Studios, the game challenges you to manage and maintain your own space station, and mixes economic simulation, strategic conquest and real-time strategy gameplay with more than a dash of humor. It’s a game we’ve been excited about for a while, and members will be able to stream every moment across all of their devices.

Spacebase Startopia on GeForce NOW
You decide how to renovate your space station, in hopes it becomes a huge interstellar hub for alien visitors.

Additionally, to celebrate the Spacebase Startopia launch, Kalypso has put some of its greatest GeForce NOW-enabled games on sale on Steam.

  • Commandos 2 – HD Remaster – 40 percent off until March 29 (Steam)
  • Immortal Realms: Vampire Wars – 50 percent off until March 29 (Steam)
  • Praetorians – HD Remaster – 40 percent off until March 29 (Steam)

More Savings

If you’re still figuring out how to fill your weekend, look no further. We’re thrilled to share additional deals our members can take advantage of:

Ubisoft

Square Enix

  • Just Cause 3 – 85 percent off until March 29 (Steam)
  • Just Cause 4: Reloaded – 80 percent off until March 29 (Steam)
  • Rise of the Tomb Raider: 20 Year Celebration – 80 percent off until March 29 (Steam)
  • Shadow of the Tomb Raider: Definitive Edition – 75 percent off until March 29 (Steam)

Deep Silver

  • Gods Will Fall – 20 percent off until April 8 (Epic Games Store)
  • Metro Exodus: Standard Edition – 66 percent off until April 8 (Epic Games Store)
  • Outward – 70 percent off until March 29 (Steam)

Other GFN Thursday Favorites

  • Car Mechanic Simulator 2018 – 60 percent off until April 2 (Steam)
  • Farm Manager 2018 – 90 percent off until April 2 (Steam)
  • Lonely Mountains: Downhill – 33 percent off until March 31 (Steam)
  • Superhot – 60 percent off until March 29 (Steam)
  • Superhot: Mind Control Delete – 60 percent off until March 29 (Steam)
  • Thief Simulator – 62 percent off until April 2 (Steam)
  • Tower of Time – 75 percent off until April 1 (Steam)

Finding offers like these and more is never out of reach. Be sure to check out the Sales and Special Offers row in the GeForce NOW app.

Play Overcooked! All you can Eat! on GeForce NOW
Overcooked! All You Can Eat! is one of 12 games joining the GeForce NOW library this week.

Let’s Play

If all that wasn’t enough new gaming goodness, don’t forget it’s still GFN Thursday, and that means more additions to the GeForce NOW library. Members can look for the following:

  • Spacebase Startopia (day-and-date release on Steam and Epic Games Store, March 26)
  • Overcooked! All You Can Eat! (day-and-date release on Steam, March 23)
  • Paradise Lost (day-and-date release on Steam, March 24)
  • Door Kickers (Steam)
  • Evoland Legendary Edition (Steam)
  • Iron Conflict (Steam)
  • Railroad Corporation (Steam)
  • Sword and Fairy 7 Trial (Steam)
  • Thief Gold (Steam)
  • Trackmania United Forever (Steam)
  • Worms Reloaded (Steam)
  • Wrench (Steam)

What are you planning to play this weekend? Let us know on Twitter or in the comments below.

The post Come Sale Away with GFN Thursday appeared first on The Official NVIDIA Blog.
