Helping people understand AI

If you’re like me, you may have noticed that AI has become a part of daily life. I wake up each morning and ask my smart assistant about the weather. I recently applied for a new credit card and the credit limit was likely determined by a machine learning model. And while typing the previous sentence, I got a word choice suggestion that “probably” might flow better than “likely,” a suggestion powered by AI.

As a member of Google’s Responsible Innovation team, I think a lot about how AI works and how to develop it responsibly. Recently, I spoke with Patrick Gage Kelley, Head of Research Strategy on Google’s Trust & Safety team, to learn more about developing products that help people recognize and understand AI in their daily lives.

How do you help people navigate a world with so much AI?

My goal is to ensure that people, at a basic level, know how AI works and how it impacts their lives. AI systems can be really complicated, but the goal of explaining AI isn’t to get everyone to become programmers and understand all of the technical details — it’s to make sure people understand the parts that matter to them.

When AI makes a decision that affects people (whether it’s recommending a video or determining whether they qualify for a loan), we want to explain how that decision was made. And we don’t want to just provide a complicated technical explanation, but rather information that is meaningful and helpful, and that equips people to act if needed.

We also want to find the best times to explain AI. Our goal is to help people develop AI literacy early, including in primary and secondary education. And when people use products that rely on AI (everything from online services to medical devices), we want to include a lot of chances for people to learn about the role AI plays, as well as its benefits and limitations. For example, if people are told early on what kinds of mistakes AI-powered products are likely to make, then they are better prepared to understand and remedy situations that might arise.

Do I need to be a mathematician or programmer to have a meaningful understanding of AI?

No! A good metaphor here is financial literacy. While we may not need to know every detail of what goes into interest rate hikes or the intricacies of financial markets, it’s important to know how they impact us — from paying off credit cards to buying a home to paying off student loans. In the same way, AI explainability isn’t about understanding every technical aspect of a machine learning algorithm – it’s about knowing how to interact with it and how it impacts our daily lives.

How should AI practitioners — developers, designers, researchers, students, and others — think about AI explainability?

Lots of practitioners are doing important work on explainability. Some focus on interpretability, making it easier to identify specific factors that influence a decision. Others focus on providing “in-the-moment explanations” right when AI makes a decision. These can be helpful, especially when carefully designed. However, AI systems are often so complex that we can’t rely on in-the-moment explanations entirely. It’s just too much information to pack into a single moment. Instead, AI education and literacy should be incorporated into the entire user journey and built continuously throughout a person’s life.

More generally, AI practitioners should think about AI explainability as fundamental to the design and development of the entire product experience. At Google, we use our AI Principles to guide responsible technology development. In accordance with AI Principle #4: “Be accountable to people,” we encourage AI practitioners to think about all the moments and ways they can help people understand how AI operates and makes decisions.

How are you and your collaborators working to improve explanations of AI?

We develop resources that help AI practitioners learn creative ways to incorporate AI explainability in product design. For example, in the PAIR Guidebook we launched a series of ethical case studies to help AI practitioners think through tricky issues and hone their skills for explaining AI. We also do fundamental research like this paper to learn more about how people perceive AI as a decision-maker, and what values they would like AI-powered products to embody.

We’ve learned that many AI practitioners want concrete examples of good explanations of AI that they can build on, so we’re currently developing a story-driven visual design toolkit for explanations of a fictional AI app. The toolkit will be publicly available, so teams in startups and tech companies everywhere can prioritize explainability in their work.

An illustration of a sailboat navigating the coast of Maine

The visual design toolkit provides story-driven examples of good explanations of AI.

I want to learn more about AI explainability. Where should I start?

This February, we released an Applied Digital Skills lesson, “Discover AI in Daily Life.” It’s a great place to start for anyone who wants to learn more about how we interact with AI every day.

We also hope to speak about AI explainability at the upcoming South by Southwest Conference. Our proposed session would dive deeper into these topics, including our visual design toolkit for product designers. If you’re interested in learning more about AI explainability and our work, you can vote for our proposal through the SXSW PanelPicker® here.

Read More

Easy A: GeForce NOW Brings Higher Resolution and Frame Rates for Browser Streaming on PC

Class is in session this GFN Thursday as GeForce NOW makes the up-grade with support for higher resolutions and frame rates in Chrome browser on PC. It’s the easiest way to spice up a boring study session.

When the lecture is over, dive into the six games joining the GeForce NOW library this week, where new adventure always awaits.

The Perfect Study Break

All work and no play isn’t the GeForce NOW way. No one should be away from their games, even if they’re going back to school. GeForce NOW streams the best PC games across nearly all devices, including low-powered PCs with a Chrome or Edge browser.

1440p Gameplay in Chrome Browser on GeForce NOW
Enabling 1440p 120 FPS for browser streaming is easy: Visit “Settings,” then select “Custom” streaming quality to adjust the resolution and frame rate settings.

RTX 3080 members can now level up their browser gameplay at up to 1440p and 120 frames per second. No app install is required — just open a Chrome or Edge browser on PC, go to play.geforcenow.com, select these new resolutions and refresh rates from the GeForce NOW Settings menu, and jump into games in seconds, with no downloads and less friction.

It’s never been easier to explore the more than 1,300 titles in the GeForce NOW library. Have some downtime during lab work? Sneak in a round of Apex Legends. Need a break from a boring textbook? Take a trip to Teyvat in Genshin Impact.

Stay connected with friends for multiplayer — like in Path of Exile’s latest expansion, “Lake of Kalandra” — so even if squadmates are making their next moves at different schools, they can stick together and get into the gaming action.

Here’s Your Homework

Thymesia on GeForce NOW
Save a kingdom fallen to an age of calamity in ‘Thymesia,’ a grueling action-RPG with fast-paced combat.

Pop quiz: What’s the best part of GFN Thursday?

Answer: More games, of course. You all get an A+.

Buckle up for 6 new releases this week:

Finally, for a little extra credit, we’ve got a question for you. Share your answers on Twitter or in the comments below.

The post Easy A: GeForce NOW Brings Higher Resolution and Frame Rates for Browser Streaming on PC appeared first on NVIDIA Blog.

Read More

Easily list and initialize models with new APIs in TorchVision

TorchVision now supports listing and initializing all available built-in models and weights by name. This new API builds upon the recently introduced Multi-weight support API, is currently in Beta, and addresses a long-standing request from the community.

You can try out the new API in the latest nightly release of TorchVision. We’re looking to collect feedback ahead of finalizing the feature in TorchVision v0.14. We have created a dedicated GitHub issue where you can post your comments, questions, and suggestions!

Querying and initializing available models

Before the new model registration API, developers had to query the __dict__ attribute of the modules in order to list all available models or to fetch a specific model builder method by its name:

import torchvision

# Initialize a model by its name:
model = torchvision.models.__dict__[model_name]()

# List available models:
available_models = [
    k for k, v in torchvision.models.__dict__.items()
    if callable(v) and k[0].islower() and k[0] != "_"
]

The above approach does not always produce the expected results and is hard to discover. For example, since the get_weight() method is exposed publicly under the same module, it will be included in the list despite not being a model. In general, the community had previously asked us to reduce verbosity (fewer imports, shorter names, etc.) and to support initializing models and weights directly from their names (for better support of configs, TorchHub, etc.). To solve this problem, we have developed a model registration API.

A new approach

We’ve added 4 new methods under the torchvision.models module:

from torchvision.models import get_model, get_model_weights, get_weight, list_models

The styles and naming conventions align closely with a prototype mechanism proposed by Philip Meier for the Datasets V2 API, aiming to offer a similar user experience. The model registration methods are kept private on purpose as we currently focus only on supporting the built-in models of TorchVision.

List models

Listing all available models in TorchVision can be done with a single function call:

>>> list_models()
['alexnet', 'mobilenet_v3_large', 'mobilenet_v3_small', 'quantized_mobilenet_v3_large', ...]

To list the available models of specific submodules:

>>> list_models(module=torchvision.models)
['alexnet', 'mobilenet_v3_large', 'mobilenet_v3_small', ...]
>>> list_models(module=torchvision.models.quantization)
['quantized_mobilenet_v3_large', ...]

Initialize models

Now that you know which models are available, you can easily initialize a model with pre-trained weights:

>>> get_model("quantized_mobilenet_v3_large", weights="DEFAULT")
QuantizableMobileNetV3(
  (features): Sequential(
   ....
   )
)

Get weights

Sometimes, while working with config files or using TorchHub, you might have the name of a specific weight entry and wish to get its instance. This can be easily done with the following method:

>>> get_weight("ResNet50_Weights.IMAGENET1K_V2")
ResNet50_Weights.IMAGENET1K_V2

To get the enum class with all available weights of a specific model you can use either its name:

>>> get_model_weights("quantized_mobilenet_v3_large")
<enum 'MobileNet_V3_Large_QuantizedWeights'>

Or its model builder method:

>>> get_model_weights(torchvision.models.quantization.mobilenet_v3_large)
<enum 'MobileNet_V3_Large_QuantizedWeights'>

TorchHub support

The new methods are also available via TorchHub:

import torch

# Fetching a specific weight entry by its name:
weights = torch.hub.load("pytorch/vision", "get_weight", weights="ResNet50_Weights.IMAGENET1K_V2")

# Fetching the weights enum class to list all available entries:
weight_enum = torch.hub.load("pytorch/vision", "get_model_weights", name="resnet50")
print([weight for weight in weight_enum])

Putting it all together

For example, if you wanted to retrieve all the small-sized models with pre-trained weights and initialize one of them, it’s a matter of using the above APIs:

import torchvision
from torchvision.models import get_model, get_model_weights, list_models


max_params = 5000000

tiny_models = []
for model_name in list_models(module=torchvision.models):
    weights_enum = get_model_weights(model_name)
    if len([w for w in weights_enum if w.meta["num_params"] <= max_params]) > 0:
        tiny_models.append(model_name)

print(tiny_models)
# ['mnasnet0_5', 'mnasnet0_75', 'mnasnet1_0', 'mobilenet_v2', ...]

model = get_model(tiny_models[0], weights="DEFAULT")
print(sum(x.numel() for x in model.state_dict().values()))
# 2239188

For more technical details, please see the original RFC. Please spare a few minutes to provide your feedback on the new API, as this is crucial for graduating it from Beta and including it in the next release. You can do this on the dedicated GitHub issue. We are looking forward to reading your comments!

Read More

Discovering when an agent is present in a system

We want to build safe, aligned artificial general intelligence (AGI) systems that pursue the intended goals of their designers. Causal influence diagrams (CIDs) are a way to model decision-making situations that allow us to reason about agent incentives. By relating training setups to the incentives that shape agent behaviour, CIDs help illuminate potential risks before training an agent and can inspire better agent designs. But how do we know when a CID is an accurate model of a training setup?

Read More

AWS Localization uses Amazon Translate to scale localization

The AWS website is currently available in 16 languages (12 for the AWS Management Console and for technical documentation): Arabic, Chinese Simplified, Chinese Traditional, English, French, German, Indonesian, Italian, Japanese, Korean, Portuguese, Russian, Spanish, Thai, Turkish, and Vietnamese. Customers all over the world gain hands-on experience with the AWS platform, products, and services in their native language. This is made possible thanks to the AWS Localization team (AWSLOC).

AWSLOC manages the end-to-end localization process of digital content at AWS (webpages, consoles, technical documentation, e-books, banners, videos, and more). On average, the team manages 48,000 projects across all digital assets yearly, which amounts to over 3 billion translated words. Given the growing demand of global customers and new local cloud adoption journeys, AWS Localization needs to support content localization at scale, with the aim to make more content available and cater to new markets. To do so, AWSLOC uses a network of over 2,800 linguists globally and supports hundreds of content creators across AWS to scale localization. The team strives to continuously improve the language experience for customers by investing heavily in automation and building automated pipelines for all content types.

AWSLOC aspires to build a future where you can interact with AWS in your preferred language. To achieve this vision, they’re using AWS machine translation and Amazon Translate. The goal is to remove language barriers and make AWS content more accessible through consistent locale-specific experiences to help every AWS creator deliver what matters most to global audiences.

This post describes how AWSLOC uses Amazon Translate to scale localization and offer their services to new locales. Amazon Translate is a neural machine translation service that delivers fast, high-quality, cost-effective, and customizable language translation. Neural machine translation is a form of language translation that uses deep learning models to deliver accurate and natural-sounding translation. For more information about the languages Amazon Translate supports, see Supported languages and language codes.
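
As a point of reference, a single real-time translation request with the boto3 Translate client looks like the following minimal sketch; the example text and language codes are illustrative, and AWS credentials are assumed to be configured in the environment.

import boto3

# Minimal real-time translation call; the text and language codes below
# are illustrative examples only.
translate = boto3.client('translate')

response = translate.translate_text(
    Text='AWS Localization manages the end-to-end localization of digital content at AWS.',
    SourceLanguageCode='en',
    TargetLanguageCode='fr'
)

print(response['TranslatedText'])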

How AWSLOC uses Amazon Translate

The implementation of machine translation allows AWSLOC to speed up the localization process for all types of content. AWSLOC chose AWS technical documentation to jumpstart their machine translation journey with Amazon Translate because it’s one of the pillars of AWS. Around 18% of all customers chose to view technical documentation in their local language in 2021, a 27% increase since 2020. In 2020 alone, over 1,435 features and 31 new services were added to technical documentation, which generated a 353% increase in translation volume in 2021.

To cater to this demand for translated documentation, AWSLOC partnered with Amazon Translate to optimize the localization processes.

Amazon Translate is used to pre-translate the strings that fall below a fuzzy matching threshold (against the translation memory) across 10 supported languages. A dedicated Amazon Translate instance is configured with Active Custom Translation (ACT), and the corresponding parallel data is updated on a monthly basis. In most language pairs, the Amazon Translate plus ACT output has shown a positive trend in quality improvement across the board. Furthermore, to raise the bar on quality, a human post-editing process is performed on assets that have higher customer visibility. AWSLOC established a governance process to monitor the migration of content across machine translation and machine translation post-editing (MTPE), including MTPE-Light and MTPE-Premium. Human editors review the MT output to correct translation errors, and those corrections are incorporated back into the tool via the ACT process. The engine is refreshed regularly (once every 40 days on average), with the contributions consisting mostly of bug submissions.

AWSLOC follows best practices for maintaining the ACT table, including marking some terms with the do-not-translate feature provided by Amazon Translate.
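
To make this concrete, here is a hedged sketch of how a batch translation job that uses parallel data (ACT) and a custom terminology for do-not-translate terms can be started with the boto3 Translate client. The S3 locations, IAM role, and resource names are hypothetical placeholders, not AWSLOC’s actual configuration.

import boto3

translate = boto3.client('translate')

# Hypothetical bucket, role, and resource names for illustration only.
response = translate.start_text_translation_job(
    JobName='docs-pretranslation-batch',
    InputDataConfig={
        'S3Uri': 's3://example-bucket/input/',
        'ContentType': 'text/html'
    },
    OutputDataConfig={
        'S3Uri': 's3://example-bucket/output/'
    },
    DataAccessRoleArn='arn:aws:iam::111111111111:role/TranslateDataAccessRole',
    SourceLanguageCode='en',
    TargetLanguageCodes=['fr'],
    # Parallel data enables Active Custom Translation (ACT).
    ParallelDataNames=['example-parallel-data'],
    # A custom terminology can carry do-not-translate terms such as brand names.
    TerminologyNames=['example-terminology']
)

print(response['JobId'], response['JobStatus'])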

The following diagram illustrates the detailed workflow.

The main components in the process are as follows:

  1. Translation memory – The database that stores sentences, paragraphs, or bullet points that have been previously translated, in order to help human translators. This database stores the source text and its corresponding translation in language pairs, called translation units.
  2. Language quality service (LQS) – The accuracy check that an asset goes through after the Language Service Provider (LSP) completes their pass. 20% of the asset is spot-checked unless otherwise specified.
  3. Parallel data – A set of translation examples (source segments paired with their desired translations) that Amazon Translate uses with Active Custom Translation to customize machine translation output.
  4. Fuzzy matching – This technique is used in computer-assisted translation as a special case of record linkage. It works with matches that may be less than 100% perfect when finding correspondences between segments of a text and entries in a database of previous translations.
  5. Do-not-translate terms – A list of phrases and words that don’t require translation, such as brand names and trademarks.
  6. Pre-translation – The initial application of do-not-translate terms, translation memory, and machine translation or human translation engines against a source text before it’s presented to linguists.
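
As a simplified illustration of the pre-translation step, the sketch below reuses a translation memory entry when its fuzzy-match score clears a threshold and falls back to machine translation otherwise. The threshold value, the SequenceMatcher-based scoring, and the machine_translate callable are illustrative assumptions, not AWSLOC’s actual implementation.

from difflib import SequenceMatcher

# Toy translation memory: source segment mapped to its stored translation.
translation_memory = {
    'Choose your AWS Region.': 'Choisissez votre région AWS.',
}

FUZZY_THRESHOLD = 0.85  # illustrative threshold only


def best_fuzzy_match(segment, memory):
    """Return the highest similarity score and its stored translation."""
    best_score, best_target = 0.0, None
    for source, target in memory.items():
        score = SequenceMatcher(None, segment, source).ratio()
        if score > best_score:
            best_score, best_target = score, target
    return best_score, best_target


def pretranslate(segment, memory, machine_translate):
    """Reuse the translation memory above the threshold; otherwise use MT."""
    score, target = best_fuzzy_match(segment, memory)
    if score >= FUZZY_THRESHOLD and target is not None:
        return target
    return machine_translate(segment)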

MTPE-Light produces understandable but not stylistically perfect text. The two service levels focus on different categories of issues:

  • MTPE-Light: additions and omissions, accuracy, spelling, numbers, grammar
  • MTPE-Premium: punctuation, consistency, literalness, style, preferential terminology, formatting errors

Multi-faceted impacts

Amazon Translate is a solution for localization projects at scale. With Amazon Translate, the project turnaround time isn’t tethered to translation volume. Amazon Translate can deliver more than 50,000 words within 1 hour, compared to traditional localization cycles, which can complete 10,000-word projects in 7–8 days and 50,000-word projects in 30–35 days. Amazon Translate is also 10 times cheaper than standard translation, and it makes it easier to track and manage the localization budget. Compared to human translation projects that use MTPE-Premium, AWSLOC observed savings of up to 40%, and savings of up to 60% for MTPE-Light. Additionally, projects that use machine translation exclusively incur only a monthly flat fee: the technology cost of the translation management system AWSLOC uses to process machine translation.

Lastly, thanks to Amazon Translate, AWSLOC is now able to go from monthly to weekly refresh cycles for technical documentation.

All in all, machine translation is the most cost-effective and time-saving option for any global localization team that needs to handle a growing volume of content localization over the long term.

Conclusion

Amazon Translate delivers significant benefits to Amazon and to our customers, both in cost savings and in delivering localized content faster and in more languages. For more information about the capabilities of Amazon Translate, see the Amazon Translate Developer Guide. If you have any questions or feedback, feel free to contact us or leave a comment.


About the authors

Marie-Alice Daniel is a Language Quality Manager at AWS, based in Luxembourg. She leads a variety of efforts to monitor and improve the quality of localized AWS content, especially Marketing content, with a focus on customer social outreach. She also supports stakeholders to address quality concerns and to ensure localized content consistently meets the quality bar.

Ajit Manuel is a Senior Product Manager (Tech) at AWS, based in Seattle. Ajit leads the localization product management team that builds solutions centered around language analytics services, translation automation and language research and design. The solutions that Ajit’s team builds help AWS scale its global footprint while staying locally relevant. Ajit is passionate about building innovative products especially in niche markets and has pioneered solutions that augmented digital transformation within the insurance-tech and media-analytics space.

Read More

Incrementally update a dataset with a bulk import mechanism in Amazon Personalize

We are excited to announce that Amazon Personalize now supports incremental bulk dataset imports, a new option for updating your data and improving the quality of your recommendations. Keeping your datasets current is an important part of maintaining the relevance of your recommendations. Prior to this new feature launch, Amazon Personalize offered two mechanisms for ingesting data:

  • DatasetImportJob: DatasetImportJob is a bulk data ingestion mechanism designed to import large datasets into Amazon Personalize. A typical journey starts with importing your historical interactions dataset in addition to your item catalog and user dataset. DatasetImportJob can then be used to keep your datasets current by sending updated records in bulk. Prior to this launch, data ingested via previous import jobs was overwritten by any subsequent DatasetImportJob.
  • Streaming APIs: The streaming APIs (PutEvents, PutUsers, and PutItems) are designed to incrementally update each respective dataset in real time. For example, after you have trained your model and launched your campaign, your users continue to generate interactions data. This data is then ingested via the PutEvents API, which incrementally updates your interactions dataset (see the sketch after this list). Using the streaming APIs allows you to ingest data as you get it rather than accumulating the data and scheduling ingestion.
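
As referenced above, here is a minimal sketch of a streaming ingestion call with the PutEvents API; the tracking ID and the user, session, and item identifiers are illustrative placeholders.

from datetime import datetime

import boto3

personalize_events = boto3.client('personalize-events')

# The tracking ID and the user, session, and item identifiers below are
# illustrative placeholders.
personalize_events.put_events(
    trackingId='your-event-tracker-id',
    userId='user-123',
    sessionId='session-456',
    eventList=[
        {
            'eventType': 'click',
            'itemId': 'item-789',
            'sentAt': datetime.now()  # timestamp of the interaction
        }
    ]
)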

With incremental bulk imports, Amazon Personalize simplifies the data ingestion of historical records by enabling you to import incremental changes to your datasets with a DatasetImportJob. You can import 100 GB of data per FULL DatasetImportJob or 1 GB of data per INCREMENTAL DatasetImportJob. Data added to the datasets using INCREMENTAL imports is appended to your existing datasets. If an incremental import duplicates records found in your existing dataset, Amazon Personalize updates those records with the newly imported version, further simplifying the data ingestion process. In the following sections, we describe the changes to the existing API to support incremental dataset imports.

CreateDatasetImportJob

A new parameter called importMode has been added to the CreateDatasetImportJob API. This parameter is an enum type with two values: FULL and INCREMENTAL. The parameter is optional and is FULL by default to preserve backward compatibility. The CreateDatasetImportJob request is as follows:

{
   "datasetArn": "string",
   "dataSource": { 
      "dataLocation": "string"
   },
   "jobName": "string",
   "roleArn": "string",
   "importMode": {INCREMENTAL, FULL}
}

The Boto3 API is create_dataset_import_job, and the AWS Command Line Interface (AWS CLI) command is create-dataset-import-job.

DescribeDatasetImportJob

The response to DescribeDatasetImportJob has been extended to include whether the import was a full or incremental import. The type of import is indicated in a new importMode field, which is an enum type with two values: FULL and INCREMENTAL. The DescribeDatasetImportJob response is as follows:

{ 
    "datasetImportJob": {
        "creationDateTime": number,
        "datasetArn": "string",
        "datasetImportJobArn": "string",
        "dataSource": {
            "dataLocation": "string"
        },
        "failureReason": "string",
        "jobName": "string",
        "lastUpdatedDateTime": number,
        "roleArn": "string",
        "status": "string",
        "importMode": {INCREMENTAL, FULL}
    }
}

The Boto3 API is describe_dataset_import_job, and the AWS CLI command is describe-dataset-import-job.

ListDatasetImportJob

The response to ListDatasetImportJob has been extended to include whether the import was a full or incremental import. The type of import is indicated in a new importMode field, which is an enum type with two values: FULL and INCREMENTAL. The ListDatasetImportJob response is as follows:

{ 
    "datasetImportJobs": [ { 
        "creationDateTime": number,
        "datasetImportJobArn": "string",
        "failureReason": "string",
        "jobName": "string",
        "lastUpdatedDateTime": number,
        "status": "string",
        "importMode": " {INCREMENTAL, FULL}
    } ],
    "nextToken": "string" 
}

The Boto3 API is list_dataset_import_jobs, and the AWS CLI command is list-dataset-import-jobs.

Code example

The following code shows how to create a dataset import job for incremental bulk import using the SDK for Python (Boto3):

import boto3

personalize = boto3.client('personalize')

response = personalize.create_dataset_import_job(
    jobName = 'YourImportJob',
    datasetArn = 'arn:aws:personalize:us-east-1:111111111111:dataset/AmazonPersonalizeExample/INTERACTIONS',
    dataSource = {'dataLocation':'s3://bucket/file.csv'},
    roleArn = 'role_arn',
    importMode = 'INCREMENTAL'
)

dsij_arn = response['datasetImportJobArn']

print('Dataset Import Job arn: ' + dsij_arn)

description = personalize.describe_dataset_import_job(
    datasetImportJobArn = dsij_arn)['datasetImportJob']

print('Name: ' + description['jobName'])
print('ARN: ' + description['datasetImportJobArn'])
print('Status: ' + description['status'])

Summary

In this post, we described how you can use this new feature in Amazon Personalize to perform incremental updates to a dataset with bulk import, keeping the data fresh and improving the relevance of Amazon Personalize recommendations. If you have delayed access to your data, incremental bulk import allows you to import your data more easily by appending it to your existing datasets.

Try out this new feature by accessing Amazon Personalize now.


About the authors

Neelam Koshiya is an enterprise solution architect at AWS. Her current focus is to help enterprise customers with their cloud adoption journey for strategic business outcomes. In her spare time, she enjoys reading and being outdoors.

James Jory is a Principal Solutions Architect in Applied AI with AWS. He has a special interest in personalization and recommender systems and a background in ecommerce, marketing technology, and customer data analytics. In his spare time, he enjoys camping and auto racing simulations.

Daniel Foley is a Senior Product Manager for Amazon Personalize. He is focused on building applications that leverage artificial intelligence to solve our customers’ largest challenges. Outside of work, Dan is an avid skier and hiker.

Alex Berlingeri is a Software Development Engineer with Amazon Personalize working on a machine learning powered recommendations service. In his free time he enjoys reading, working out and watching soccer.

Read More

Immunai Co-Founder Luis Voloch on Using Deep Learning to Develop New Drugs

Mapping the immune system could lead to the creation of drugs that help our bodies win the fight against cancer and other diseases. That’s the big idea behind immunotherapy. The problem: the immune system is incredibly complex.

Enter Immunai, a biotech company that’s using cutting-edge genomics and machine learning technology to map the human immune system and develop new immunotherapies against cancer and autoimmune diseases.

On this episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Luis Voloch, co-founder and CTO of Immunai, about tackling the challenges of the immune system with a machine learning and data science mindset.

You Might Also Like

Artem Cherkasov and Olexandr Isayev on Democratizing Drug Discovery With NVIDIA GPUs

It may seem intuitive that AI and deep learning can speed up workflows — including novel drug discovery, a typically years-long and several-billion-dollar endeavor. However, there is a dearth of recent research reviewing how accelerated computing can impact the process. Professors Artem Cherkasov and Olexandr Isayev discuss how GPUs can help democratize drug discovery.

Lending a Helping Hand: Jules Anh Tuan Nguyen on Building a Neuroprosthetic

Is it possible to manipulate things with your mind? Possibly. University of Minnesota postdoctoral researcher Jules Anh Tuan Nguyen discusses allowing amputees to control their prosthetic limbs with their thoughts, using neural decoders and deep learning.

Wild Things: 3D Reconstructions of Endangered Species with NVIDIA’s Sifei Liu

Studying endangered species can be difficult, as they’re elusive, and the act of observing them can disrupt their lives. Sifei Liu, a senior research scientist at NVIDIA, discusses how scientists can avoid these pitfalls by studying AI-generated 3D representations of these endangered species.

Subscribe to the AI Podcast: Now Available on Amazon Music

You can now listen to the AI Podcast through Amazon Music.

Also get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out our listener survey.

 

The post Immunai Co-Founder Luis Voloch on Using Deep Learning to Develop New Drugs appeared first on NVIDIA Blog.

Read More