Grand Entrance: Human Horizons Unveils Smart GT Built on NVIDIA DRIVE Orin
Tourer vehicles just became a little more grand.
Electric vehicle maker Human Horizons provided a detailed glimpse earlier this month of its latest production model: the GT HiPhi Z. The intelligent EV is poised to redefine the grand tourer vehicle category with innovative, software-defined capabilities that bring luxurious cruising to the next level.
The vehicle’s marquee features include an in-vehicle AI assistant and an autonomous driving system powered by NVIDIA DRIVE Orin.
The GT badge first appeared on vehicles in the mid-20th century, combining smooth performance with a roomy interior for longer joy rides. Since then, the segment has diversified, with varied takes on horsepower and body design.
The HiPhi Z further iterates on the vehicle type, emphasizing smart performance and a convenient, comfortable in-cabin experience.
Smooth Sailing
An EV designed to be driven, the GT HiPhi Z also incorporates robust advanced driver assistance features that can give humans a break on longer trips.
The HiPhi Pilot ADAS platform provides dual redundancy for computing, perception, communication, braking, steering and power supply. It uses the high-performance AI compute of NVIDIA DRIVE Orin and 34 sensors to perform assisted driving and parking, as well as smart summon.
DRIVE Orin is designed to handle the large number of applications and deep neural networks running simultaneously for autonomous driving capabilities. It’s architected to achieve systematic safety standards such as ISO 26262 ASIL-D.
With this high level of performance at its core, the HiPhi Pilot system delivers seamless automated features that remove the stress from driving.
Intelligent Interior
Staying true to its GT DNA, the HiPhi Z sports a luxurious interior that delivers effortless comfort for both the driver and passengers.
The cabin includes suede bucket seats, ambient panel lights and a 23-speaker audio system for an immersive sensory environment.
It’s also intelligent, with the HiPhi Bot AI companion that can automatically adjust aspects of the vehicle experience. The AI assistant uses a vehicle-grade, adjustable, high-speed motion robotic arm to interact with passengers. It can move back and forth in less than a second, with control accuracy of up to 0.001 millimeters, performing a variety of delicate movements seamlessly.
The GT HiPhi Z is currently on display in Shenzhen, China, and will tour nearly a dozen other cities. Human Horizons plans to release details of the full launch at the Chengdu Auto Show in August.
The post Grand Entrance: Human Horizons Unveils Smart GT Built on NVIDIA DRIVE Orin appeared first on NVIDIA Blog.
Artificial intelligence model finds potential drug molecules a thousand times faster
The entirety of the known universe is teeming with an infinite number of molecules. But what fraction of these molecules have potential drug-like traits that can be used to develop life-saving drug treatments? Millions? Billions? Trillions? The answer: novemdecillion, or 10^60. This gargantuan number prolongs the drug development process for fast-spreading diseases like Covid-19 because it is far beyond what existing drug design models can compute. To put it into perspective, the Milky Way has about 100 thousand million, or 10^11, stars.
In a paper that will be presented at the International Conference on Machine Learning (ICML), MIT researchers developed a geometric deep-learning model called EquiBind that is 1,200 times faster than one of the fastest existing computational molecular docking models, QuickVina2-W, in successfully binding drug-like molecules to proteins. EquiBind is based on its predecessor, EquiDock, which specializes in binding two proteins using a technique developed by the late Octavian-Eugen Ganea, a recent MIT Computer Science and Artificial Intelligence Laboratory and Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic) postdoc, who also co-authored the EquiBind paper.
Before drug development can even take place, drug researchers must find promising drug-like molecules that can bind or “dock” properly onto certain protein targets in a process known as drug discovery. After successfully docking to the protein, the binding drug, also known as the ligand, can stop a protein from functioning. If this happens to an essential protein of a bacterium, it can kill the bacterium, conferring protection to the human body.
However, the process of drug discovery can be costly both financially and computationally, with billions of dollars poured into the process and over a decade of development and testing before final approval from the Food and Drug Administration. What’s more, 90 percent of all drugs fail once they are tested in humans due to having no effects or too many side effects. One of the ways drug companies recoup the costs of these failures is by raising the prices of the drugs that are successful.
The current computational process for finding promising drug candidate molecules goes like this: most state-of-the-art computational models rely upon heavy candidate sampling coupled with methods like scoring, ranking, and fine-tuning to get the best “fit” between the ligand and the protein.
Hannes Stärk, a first-year graduate student at the MIT Department of Electrical Engineering and Computer Science and lead author of the paper, likens typical ligand-to-protein binding methodologies to “trying to fit a key into a lock with a lot of keyholes.” Typical models time-consumingly score each “fit” before choosing the best one. In contrast, EquiBind directly predicts the precise key location in a single step without prior knowledge of the protein’s target pocket, which is known as “blind docking.”
Unlike most models that require several attempts to find a favorable position for the ligand in the protein, EquiBind already has built-in geometric reasoning that helps the model learn the underlying physics of molecules and successfully generalize to make better predictions when encountering new, unseen data.
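To make the contrast concrete, here is a small, purely conceptual Python sketch of the two workflows described above; `sample_pose`, `score_pose`, and `direct_binding_model` are hypothetical placeholders for illustration, not the EquiBind or QuickVina2-W APIs.

```python
import numpy as np
from typing import Callable

def docking_by_sampling(ligand, protein,
                        sample_pose: Callable, score_pose: Callable,
                        n_candidates: int = 10_000):
    """Classical pipeline: generate many candidate poses, score each, keep the best."""
    best_pose, best_score = None, float("inf")
    for _ in range(n_candidates):
        pose = sample_pose(ligand, protein)      # try one "keyhole"
        score = score_pose(pose, protein)        # expensive scoring/ranking step
        if score < best_score:
            best_pose, best_score = pose, score
    return best_pose

def docking_by_direct_prediction(ligand, protein, direct_binding_model: Callable):
    """Blind-docking style: a single forward pass predicts the bound ligand pose
    without being told where the protein's target pocket is."""
    return direct_binding_model(ligand, protein)

# Toy demo: "poses" are 3-D points and the unknown "pocket" is a fixed target.
rng = np.random.default_rng(0)
pocket = np.array([1.0, 2.0, 3.0])
sampled = docking_by_sampling(
    ligand=None, protein=None,
    sample_pose=lambda lig, prot: rng.normal(size=3),
    score_pose=lambda pose, prot: float(np.linalg.norm(pose - pocket)),
)
one_shot = docking_by_direct_prediction(
    None, None, direct_binding_model=lambda lig, prot: pocket.copy()
)
```

The toy demo is only meant to show the structural difference: the sampling approach loops over many candidates and keeps the best-scoring one, while the direct approach makes a single prediction per ligand-protein pair.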
The release of these findings quickly attracted the attention of industry professionals, including Pat Walters, the chief data officer for Relay Therapeutics. Walters suggested that the team try their model on an already existing drug and protein used for lung cancer, leukemia, and gastrointestinal tumors. Whereas most of the traditional docking methods failed to successfully bind the ligands that worked on those proteins, EquiBind succeeded.
“EquiBind provides a unique solution to the docking problem that incorporates both pose prediction and binding site identification,” Walters says. “This approach, which leverages information from thousands of publicly available crystal structures, has the potential to impact the field in new ways.”
“We were amazed that while all other methods got it completely wrong or only got one correct, EquiBind was able to put it into the correct pocket, so we were very happy to see the results for this,” Stärk says.
While EquiBind has received a great deal of feedback from industry professionals that has helped the team consider practical uses for the computational model, Stärk hopes to find different perspectives at the upcoming ICML in July.
“The feedback I’m most looking forward to is suggestions on how to further improve the model,” he says. “I want to discuss with those researchers … to tell them what I think can be the next steps and encourage them to go ahead and use the model for their own papers and for their own methods … we’ve had many researchers already reaching out and asking if we think the model could be useful for their problem.”
This work was funded, in part, by the Pharmaceutical Discovery and Synthesis consortium; the Jameel Clinic; the DTRA Discovery of Medical Countermeasures Against New and Emerging threats program; the DARPA Accelerated Molecular Discovery program; the MIT-Takeda Fellowship; and the NSF Expeditions grant Collaborative Research: Understanding the World Through Code.
This work is dedicated to the memory of Octavian-Eugen Ganea, who made crucial contributions to geometric machine learning research and generously mentored many students — a brilliant scholar with a humble soul.
Revisiting Mask Transformer from a Clustering Perspective
Panoptic segmentation is a computer vision problem that serves as a core task for many real-world applications. Due to its complexity, previous work often divides panoptic segmentation into semantic segmentation (assigning semantic labels, such as “person” and “sky”, to every pixel in an image) and instance segmentation (identifying and segmenting only countable objects, such as “pedestrians” and “cars”, in an image), and further divides it into several sub-tasks. Each sub-task is processed individually, and extra modules are applied to merge the results from each sub-task stage. This process is not only complex, but it also introduces many hand-designed priors when processing sub-tasks and when combining the results from different sub-task stages.
Recently, inspired by Transformer and DETR, an end-to-end solution for panoptic segmentation with mask transformers (an extension of the Transformer architecture that is used to generate segmentation masks) was proposed in MaX-DeepLab. This solution adopts a pixel path (consisting of either convolutional neural networks or vision transformers) to extract pixel features, a memory path (consisting of transformer decoder modules) to extract memory features, and a dual-path transformer for interaction between pixel features and memory features. However, the dual-path transformer, which utilizes cross-attention, was originally designed for language tasks, where the input sequence consists of dozens or hundreds of words. When it comes to vision tasks, specifically segmentation problems, the input sequence consists of tens of thousands of pixels, which not only indicates a much larger magnitude of input scale, but also represents a lower-level embedding compared to language words.
In “CMT-DeepLab: Clustering Mask Transformers for Panoptic Segmentation”, presented at CVPR 2022, and “kMaX-DeepLab: k-means Mask Transformer”, to be presented at ECCV 2022, we propose to reinterpret and redesign cross-attention from a clustering perspective (i.e., grouping pixels with the same semantic labels together), which better adapts to vision tasks. CMT-DeepLab is built upon the previous state-of-the-art method, MaX-DeepLab, and employs a pixel clustering approach to perform cross-attention, leading to a denser and more plausible attention map. kMaX-DeepLab further redesigns cross-attention to be more like a k-means clustering algorithm, with a simple change to the activation function. We demonstrate that CMT-DeepLab achieves significant performance improvements, while kMaX-DeepLab not only simplifies the modification but also further pushes the state-of-the-art by a large margin, without test-time augmentation. We are also excited to announce the open-source release of kMaX-DeepLab, our best performing segmentation model, in the DeepLab2 library.
Overview
Instead of directly applying cross-attention to vision tasks without modifications, we propose to reinterpret it from a clustering perspective. Specifically, we note that the mask transformer object queries can be considered cluster centers (which aim to group pixels with the same semantic labels), and the process of cross-attention is then similar to the k-means clustering algorithm, which iterates between (1) assigning pixels to cluster centers, where multiple pixels can be assigned to a single cluster center and some cluster centers may have no assigned pixels, and (2) updating the cluster centers by averaging the pixels assigned to them (cluster centers with no assigned pixels are left unchanged).
In CMT-DeepLab and kMaX-DeepLab, we reformulate the cross-attention from the clustering perspective, which consists of iterative cluster-assignment and cluster-update steps.
Given the popularity of the k-means clustering algorithm, in CMT-DeepLab we redesign cross-attention so that the spatial-wise softmax operation (i.e., the softmax operation that is applied along the image spatial resolution) that in effect assigns cluster centers to pixels is instead applied along the cluster centers. In kMaX-DeepLab, we further simplify the spatial-wise softmax to cluster-wise argmax (i.e., applying the argmax operation along the cluster centers). We note that the argmax operation is the same as the hard assignment (i.e., a pixel is assigned to only one cluster) used in the k-means clustering algorithm.
Reformulating the cross-attention of the mask transformer from the clustering perspective significantly improves the segmentation performance and simplifies the complex mask transformer pipeline to be more interpretable. First, pixel features are extracted from the input image with an encoder-decoder structure. Then, a set of cluster centers are used to group pixels, which are further updated based on the clustering assignments. Finally, the clustering assignment and update steps are iteratively performed, with the last assignment directly serving as segmentation predictions.
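As a concrete illustration of the assignment and update steps, here is a minimal NumPy sketch of a single clustering-style cross-attention iteration, assuming pixel features of shape (HW, D) and cluster centers of shape (K, D); the names and shapes are illustrative only, not the DeepLab2 implementation.

```python
import numpy as np

def clustering_cross_attention(P, C, hard_assignment=True):
    """One cluster-assignment + cluster-update step.

    P: (HW, D) pixel features; C: (K, D) cluster centers (object queries).
    """
    logits = P @ C.T                                  # (HW, K) pixel-to-center affinities

    if hard_assignment:
        # kMaX-DeepLab-style: cluster-wise argmax, i.e., each pixel is assigned
        # to exactly one cluster center (the k-means hard assignment).
        A = np.zeros_like(logits)
        A[np.arange(logits.shape[0]), logits.argmax(axis=1)] = 1.0
    else:
        # CMT-DeepLab-style: softmax applied along the cluster centers, rather
        # than along the spatial dimension as in standard cross-attention.
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        A = e / e.sum(axis=1, keepdims=True)

    # Cluster update: average the pixels assigned to each center; centers with
    # no assigned pixels keep their previous value.
    counts = A.sum(axis=0, keepdims=True).T           # (K, 1)
    pooled = A.T @ P                                  # (K, D)
    C_new = np.where(counts > 0, pooled / np.maximum(counts, 1e-6), C)
    return A, C_new

# Example: a 16x16 "image" with 64-d pixel features and 8 cluster centers.
rng = np.random.default_rng(0)
P = rng.normal(size=(16 * 16, 64))
C = rng.normal(size=(8, 64))
for _ in range(3):                                    # a few assignment/update iterations
    A, C = clustering_cross_attention(P, C)
```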
The meta architecture of our proposed kMaX-DeepLab consists of three components: pixel encoder, enhanced pixel decoder, and kMaX decoder. The pixel encoder is any network backbone used to extract image features. The enhanced pixel decoder includes transformer encoders to enhance the pixel features, and upsampling layers to generate higher resolution features. The series of kMaX decoders transforms the cluster centers into (1) mask embedding vectors, which are multiplied with the pixel features to generate the predicted masks, and (2) class predictions for each mask.
The meta architecture of kMaX-DeepLab.
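The sketch below, in PyTorch-style pseudocode with placeholder module names and dimensions (not the released DeepLab2 code), shows how the three components and the two outputs fit together; the toy kMaX decoder reuses the hard-assignment update from the previous sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KMaXDeepLabSketch(nn.Module):
    """Pixel encoder -> enhanced pixel decoder -> stack of kMaX decoders."""
    def __init__(self, pixel_encoder, enhanced_pixel_decoder, kmax_decoders,
                 num_clusters=128, embed_dim=256, num_classes=133):
        super().__init__()
        self.pixel_encoder = pixel_encoder            # any backbone
        self.pixel_decoder = enhanced_pixel_decoder   # transformer encoders + upsampling
        self.kmax_decoders = nn.ModuleList(kmax_decoders)
        self.cluster_centers = nn.Parameter(torch.randn(num_clusters, embed_dim))
        self.class_head = nn.Linear(embed_dim, num_classes + 1)  # +1 for "no object"

    def forward(self, images):
        feats = self.pixel_encoder(images)                 # extract image features
        pixel_feats = self.pixel_decoder(feats)            # (B, D, H, W)
        B = pixel_feats.shape[0]
        centers = self.cluster_centers.unsqueeze(0).expand(B, -1, -1)  # (B, K, D)
        for decoder in self.kmax_decoders:                 # iterative cluster updates
            centers = decoder(centers, pixel_feats)
        # (1) mask embeddings multiply with pixel features to form predicted masks
        masks = torch.einsum('bkd,bdhw->bkhw', centers, pixel_feats)
        # (2) a class prediction for each mask
        classes = self.class_head(centers)                 # (B, K, num_classes + 1)
        return masks, classes

class ToyKMaXDecoder(nn.Module):
    """Cluster update via hard (argmax) pixel-to-center assignment."""
    def forward(self, centers, pixel_feats):
        B, D, H, W = pixel_feats.shape
        flat = pixel_feats.flatten(2)                                  # (B, D, HW)
        logits = torch.einsum('bkd,bdn->bkn', centers, flat)           # affinities
        assign = F.one_hot(logits.argmax(dim=1),
                           num_classes=centers.shape[1]).transpose(1, 2).float()
        counts = assign.sum(dim=2, keepdim=True).clamp(min=1.0)
        return torch.einsum('bkn,bdn->bkd', assign, flat) / counts     # averaged pixels

# Smoke test with stand-in modules (not meaningful networks).
model = KMaXDeepLabSketch(
    pixel_encoder=nn.Conv2d(3, 256, kernel_size=3, padding=1),
    enhanced_pixel_decoder=nn.Identity(),
    kmax_decoders=[ToyKMaXDecoder() for _ in range(3)],
)
masks, classes = model(torch.randn(2, 3, 64, 64))
print(masks.shape, classes.shape)   # (2, 128, 64, 64) and (2, 128, 134)
```

In the full model, each kMaX decoder also includes self-attention among the cluster centers and a feed-forward network; the toy decoder above keeps only the clustering-style cross-attention for brevity.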
Results
We evaluate CMT-DeepLab and kMaX-DeepLab using the panoptic quality (PQ) metric on two of the most challenging panoptic segmentation datasets, COCO and Cityscapes, comparing against MaX-DeepLab and other state-of-the-art methods. CMT-DeepLab achieves a significant performance improvement, while kMaX-DeepLab not only simplifies the modification but also further pushes the state-of-the-art by a large margin, reaching 58.0% PQ on the COCO val set, and 68.4% PQ, 44.0% mask Average Precision (mask AP), and 83.5% mean Intersection-over-Union (mIoU) on the Cityscapes val set, without test-time augmentation or an external dataset.
Method | PQ
MaX-DeepLab | 51.1% (-6.9%)
MaskFormer | 52.7% (-5.3%)
K-Net | 54.6% (-3.4%)
CMT-DeepLab | 55.3% (-2.7%)
kMaX-DeepLab | 58.0%
Comparison on COCO val set. Deltas in parentheses are relative to kMaX-DeepLab.
Method | PQ | mask AP | mIoU
Panoptic-DeepLab | 63.0% (-5.4%) | 35.3% (-8.7%) | 80.5% (-3.0%)
Axial-DeepLab | 64.4% (-4.0%) | 36.7% (-7.3%) | 80.6% (-2.9%)
SWideRNet | 66.4% (-2.0%) | 40.1% (-3.9%) | 82.2% (-1.3%)
kMaX-DeepLab | 68.4% | 44.0% | 83.5%
Comparison on Cityscapes val set. Deltas in parentheses are relative to kMaX-DeepLab.
Designed from a clustering perspective, kMaX-DeepLab not only has a higher performance but also a more plausible visualization of the attention map to understand its working mechanism. In the example below, kMaX-DeepLab iteratively performs clustering assignments and updates, which gradually improves mask quality.
kMaX-DeepLab’s attention map can be directly visualized as a panoptic segmentation, which makes the model’s working mechanism easier to interpret (image credit: coco_url, and license).
Conclusions
We have demonstrated a way to better design mask transformers for vision tasks. With simple modifications, CMT-DeepLab and kMaX-DeepLab reformulate cross-attention to be more like a clustering algorithm. As a result, the proposed models achieve state-of-the-art performance on the challenging COCO and Cityscapes datasets. We hope that the open-source release of kMaX-DeepLab in the DeepLab2 library will facilitate future research on designing vision-specific transformer architectures.
Acknowledgements
We are thankful for the valuable discussions and support from Huiyu Wang, Dahun Kim, Siyuan Qiao, Maxwell Collins, Yukun Zhu, Florian Schroff, Hartwig Adam, and Alan Yuille.
Merge Ahead: Researcher Takes Software Bridge to Quantum Computing
Kristel Michielsen was into quantum computing before quantum computing was cool.
The computational physicist simulated quantum computers as part of her Ph.D. work in the Netherlands in the early 1990s.
Today, she manages one of Europe’s largest facilities for quantum computing, the Jülich Unified Infrastructure for Quantum Computing (JUNIQ). Her mission is to help developers pioneer this new realm with tools like NVIDIA Quantum Optimized Device Architecture (QODA).
“This helps bring quantum computing closer to the HPC and AI communities.” -Kristel Michielsen
“We can’t go on with today’s classical computers alone because they consume so much energy, and they can’t solve some problems,” said Michielsen, who leads the quantum program at the Jülich Supercomputing Center near Cologne. “But paired with quantum computers that won’t consume as much energy, I believe there may be the potential to solve some of our most complex problems.”
Enter the QPU
Because quantum processors, or QPUs, harness the properties of quantum mechanics, they’re ideally suited to simulating processes at the atomic level. That could enable fundamental advances in chemistry and materials science, starting domino effects in everything from more efficient batteries to more effective drugs.
QPUs may also help with thorny optimization problems in fields like logistics. For example, airlines face daily challenges figuring out which planes to assign to which routes.
In one experiment, a quantum computer recently installed at Jülich showed the most efficient way to route nearly 500 flights — demonstrating the technology’s potential.
Quantum computing also promises to take AI to the next level. In separate experiments, Jülich researchers used quantum machine learning to simulate how proteins bind to DNA strands and classify satellite images of Lyon, France.
Hybrids Take Best of Both Worlds
Several prototype quantum computers are now available, but none is yet powerful or dependable enough to tackle commercially relevant jobs. Still, researchers see a way forward.
“For a long time, we’ve had a vision of hybrid systems as the only way to get practical quantum computing — linked to today’s classical HPC systems, quantum computers will give us the best of both worlds,” Michielsen said.
And that’s just what Jülich and other researchers around the world are building today.
Quantum Gets 49x Boost on A100 GPUs
In addition to its current analog quantum system, Jülich plans next year to install a neutral atom quantum computer from Paris-based Pasqal. It’s also been running quantum simulations on classical systems such as its JUWELS Booster, which packs over 3,700 NVIDIA A100 Tensor Core GPUs.
“The GPU version of our universal quantum-computer simulator, called JUQCS, has given us up to 49x speedups compared to jobs running on CPU clusters — this work uses almost all the system’s GPU nodes and relies heavily on its InfiniBand network,” she said, citing a recent paper.
Classical systems like the JUWELS Booster have also recently begun running NVIDIA cuQuantum, a software development kit for accelerating quantum circuit simulation on GPUs. “For us, it’s great for cross-platform benchmarking, and for others it could be a great tool to start or optimize their quantum simulation codes,” Michielsen said of the SDK.
Hybrid Systems, Hybrid Software
With multiple HPC and quantum systems on hand and more on the way for Jülich and other research centers, one of the challenges is tying it all together.
“The HPC community needs to look in detail at applications that span everything from climate science and medicine to chemistry and physics to see what parts of the code can run on quantum systems,” she said.
It’s a Herculean task for developers entering the quantum computing era, but help’s on the way.
NVIDIA QODA acts like a software bridge. With a function call, developers can choose to run their quantum jobs on GPUs or quantum processors.
QODA’s high-level language will support every kind of quantum computer, and its compiler will be available as open-source software. And it’s supported by quantum system and software providers including Pasqal, Xanadu, QC Ware and Zapata.
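As a purely conceptual sketch of that bridge idea, the snippet below defines a small quantum job and a hypothetical `run()` dispatcher that executes it on a stand-in state-vector simulator (the role a GPU-backed simulator would play) or hands it off to a QPU backend. None of these names reflect actual QODA syntax; they only illustrate the “same job, swappable target” concept described above.

```python
import numpy as np

def simulate_bell(shots=1000):
    """Stand-in for a GPU-backed state-vector simulation of a 2-qubit Bell circuit."""
    h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard gate
    cx = np.array([[1, 0, 0, 0],                          # CNOT, control = qubit 0
                   [0, 1, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 1, 0]])
    state = np.zeros(4)
    state[0] = 1.0                                        # start in |00>
    state = np.kron(h, np.eye(2)) @ state                 # H on qubit 0
    state = cx @ state                                    # entangle the qubits
    probs = np.abs(state) ** 2
    samples = np.random.choice(4, size=shots, p=probs)
    return {format(i, "02b"): int((samples == i).sum()) for i in range(4)}

def run(job, target="gpu-statevector", **kwargs):
    """The 'bridge': the same job is dispatched to a simulator today or a QPU later."""
    if target == "gpu-statevector":
        return job(**kwargs)
    raise NotImplementedError("A physical QPU backend would be dispatched here.")

print(run(simulate_bell, shots=1000))   # roughly 50/50 between '00' and '11'
```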
Quantum Leap for HPC, AI Developers
Michielsen foresees JUNIQ providing QODA to researchers across Europe who use its quantum services.
“This helps bring quantum computing closer to the HPC and AI communities,” she said. “It will speed up how they get things done without them needing to do all the low-level programming, so it makes their life much easier.”
Michielsen expects many researchers will be using QODA to try out hybrid quantum-classical computers — over the coming year and beyond.
“Who knows, maybe one of our users will pioneer a new example of real-world hybrid computing,” she said.
Image at top courtesy of Forschungszentrum Jülich / Ralf-Uwe Limbach
The post Merge Ahead: Researcher Takes Software Bridge to Quantum Computing appeared first on NVIDIA Blog.
Machine Learning University expands with MLU Explains
Fun visual essays explain key concepts of machine learning.
Sequences That Stun: Visual Effects Artist Surfaced Studio Arrives ‘In the NVIDIA Studio’
Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows.
Visual effects savant Surfaced Studio steps In the NVIDIA Studio this week to share his clever film sequences, Fluid Simulation and Destruction, as well as his creative workflows.
These sequences feature quirky visual effects that Surfaced Studio is renowned for demonstrating on his YouTube channel.
Surfaced Studio’s successful tutorial style, dubbed “edutainment,” features his wonderfully light-hearted personality — providing vital techniques and creative insights which lead to fun, memorable learning for his subscribers.
“I’ve come to accept my nerdiness — how my lame humor litters my tutorials — and I’ve been incredibly fortunate that I ended up finding a community of like-minded people who are happy to learn alongside my own attempts of making art,” Surfaced Studio mused.
The Liquification Situation
To create the Fluid Simulation sequence, Surfaced Studio began in Adobe After Effects by combining two video clips: one of him pretending to be hit by a fluid wave, and another of his friend Jimmy running towards him. Jimmy magically disappears because of the masked layer that Surfaced Studio applied.
He then rendered the clip and imported it into Blender. This served as a reference to match the 3D scene geometry with the fluid simulation.
Surfaced Studio then selected the Mantaflow fluid feature, tweaking parameters to create the fluid simulation. For a beginner’s look at fluid simulation techniques, check out his tutorial, FLUID SIMULATIONS in Blender 2.9 with Mantaflow. This feature, accelerated by his GeForce RTX 2070 Laptop GPU, bakes simulations faster than with a CPU alone.
To capture a collision with accurate, realistic physics, Surfaced Studio set up rigid body objects, creating the physical geometry for the fluid to collide with. The Jimmy character was marked with the Use Flow property to emit the fluid at the exact moment of the collision.
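For those who prefer to script these steps, a minimal Blender Python (bpy) sketch approximating the Mantaflow setup described above might look like the following. It assumes Blender 2.9x and placeholder object names (“Domain”, “Jimmy”, “Ground”) rather than the artist’s actual scene, and it mirrors the GUI workflow only approximately.

```python
import bpy

# Fluid domain: the box inside which the liquid is simulated.
domain = bpy.data.objects["Domain"]
dom_mod = domain.modifiers.new(name="Fluid", type='FLUID')
dom_mod.fluid_type = 'DOMAIN'
dom_mod.domain_settings.domain_type = 'LIQUID'
dom_mod.domain_settings.resolution_max = 256     # more detail, longer bakes
dom_mod.domain_settings.use_mesh = True          # generate a renderable liquid mesh

# Flow object: the character emits liquid at the moment of the collision
# (the "Use Flow" behavior mentioned above, keyframed in the real project).
jimmy = bpy.data.objects["Jimmy"]
flow_mod = jimmy.modifiers.new(name="Fluid", type='FLUID')
flow_mod.fluid_type = 'FLOW'
flow_mod.flow_settings.flow_type = 'LIQUID'
flow_mod.flow_settings.flow_behavior = 'INFLOW'

# Effector: geometry the liquid collides and interacts with.
ground = bpy.data.objects["Ground"]
eff_mod = ground.modifiers.new(name="Fluid", type='FLUID')
eff_mod.fluid_type = 'EFFECTOR'
eff_mod.effector_settings.effector_type = 'COLLISION'

# With the domain object active, the simulation can then be baked from the
# Physics panel (or via bpy.ops.fluid.bake_all()).
```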
“It’s hard not to recommend NVIDIA GPUs for anyone wanting to explore the creative space, and I’ve been using them for well over a decade now,” said Surfaced Studio.
Surfaced Studio also enabled speed vectors to implement motion blur effects directly on the fluid simulation, adding further realism to the short.
His entire 3D creative workflow in Blender was accelerated by the RTX 2070 Laptop GPU: the fluid simulation, motion blur effects, animations and mesh generation. Blender Cycles RTX-accelerated OptiX ray tracing unlocked quick interactive modeling in the viewport and lightning-fast final renders. Surfaced Studio said his GPU saved him countless hours to reinvest in his creativity.
Surfaced Studio reached the composite stage in After Effects, applying the GPU-accelerated Curves effect to the water, shaping and illuminating it to his liking.
He then used the Boris FX Mocha AE plugin to rotoscope Jimmy — or create sequences by tracing over live-action footage frame by frame — to animate the character. This can be a lengthy process, but the GPU-accelerated plugin completed the task in mere moments.
Color touchups were applied with the Hue/Saturation, Brightness and Color Balance features, which are also GPU accelerated.
Finally, Surfaced Studio used the GPU-accelerated NVENC encoder to rapidly export his final video files.
For a deeper dive into Surfaced Studio’s process, watch his tutorial: Add 3D Fluid Simulations to Videos w/ Blender & After Effects.
“A lot of the third-party plugins that I use regularly, including Boris FX Mocha Pro, Continuum, Sapphire, Video Copilot Element 3D and Red Giant, all benefit heavily from GPU acceleration,” the artist said.
His GeForce RTX 2070 Laptop GPU worked overtime with this project — but the Fluid Simulation sequence only scratches the surface(d) of the artist’s skills.
Fire in the Hole!
Surfaced Studio built the short sequence Destruction following a similar creative workflow to Fluid Simulation. 3D scenes in Blender complemented video footage composited in After Effects, with realistic physics applied.
Destruction in Blender for Absolute Beginners covers the basics of how to break objects in Blender, add realistic physics to objects, calculate physics weight for fragments, and animate entire scenes.
3D Destruction Effects in Blender & After Effects offers tips and tricks for further compositing in After Effects, placing 2D stock footage in 3D elements, final color grading and camera-shaking techniques.
These tools set the foundation for aspiring 3D artists to build their own destructive scenes — and the “edutainment” is highly recommended viewing.
Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the NVIDIA Studio newsletter.
The post Sequences That Stun: Visual Effects Artist Surfaced Studio Arrives ‘In the NVIDIA Studio’ appeared first on NVIDIA Blog.
AI on the Sky: Stunning New Images from the James Webb Space Telescope to Be Analyzed by, and Train, AI
The release by U.S. President Joe Biden Monday of the first full-color image from the James Webb Space Telescope is already astounding — and delighting — humans around the globe.
“We can see possibilities nobody has ever seen before, we can go places nobody has ever gone before,” Biden said during a White House press event. “These images are going to remind the world that America can do big things.”
But humans aren’t the only audience for these images. Data from what Biden described as the “miraculous telescope” are also being soaked up by a new generation of GPU-accelerated AI created at UC Santa Cruz.
And Morpheus, as the team at UC Santa Cruz has dubbed the AI, won’t just be helping humans make sense of what we’re seeing. It will also use images from the $10 billion space telescope to better understand what it’s looking for.
The image released Monday is the deepest and sharpest infrared image of the distant universe to date. Dubbed “Webb’s First Deep Field,” the image of galaxy cluster SMACS 0723 is overflowing with detail.
Answering Questions
NASA reported that the thousands of galaxies — including the faintest objects ever observed in the infrared — have appeared in Webb’s view for the first time.
And Monday’s image represents just a tiny piece of what’s out there, with the image covering a patch of sky roughly the size of a grain of sand held at arm’s length by someone on the ground, explained NASA Administrator Bill Nelson.
The telescope’s iconic array of 18 interlocking hexagonal mirrors, which spans a total of 21 feet 4 inches, is peering farther into the universe, and further back into its past, than any tool to date.
“When you look at something as big as this we are going to be able to answer questions that we don’t even know what the questions are yet,” Nelson said.
Strange New Worlds
The telescope won’t just see back further in time than any scientific instrument — almost to the beginning of the universe — it may also help us see if planets outside our solar system are habitable, Nelson said.
Morpheus — which played a key role in helping scientists understand images taken on NASA’s Hubble Space Telescope — will help scientists ask, and answer, these questions, by analyzing images that are further away and from phenomena that are deeper back in time than before.
Working with Ryan Hausen, a Ph.D. student in UC Santa Cruz’s computer science department, UC Santa Cruz astronomy and astrophysics professor Brant Robertson helped create a deep learning framework that classifies astronomical objects, such as galaxies, based on the raw data streaming out of telescopes on a pixel-by-pixel basis.
“The JWST will really enable us to see the universe in a new way that we’ve never seen before,” Robertson said. “So it’s really exciting.”
Eventually, Morpheus will use these images to learn, too. Not only are the JWST’s optics unique, but the JWST will also be collecting light from galaxies that are farther away, and thus redder, than those that were visible to the Hubble.
Morpheus is trained on UC Santa Cruz’s Lux supercomputer. The machine includes 28 GPU nodes with two NVIDIA V100 GPUs each.
In other words, while we’ll all be feasting our eyes on these images for years to come, scientists will be feeding data from the JWST to AI.
Tune in: NASA and its partners will release the full series of Webb’s first full-color images and data, known as spectra, Tuesday, July 12, during a live NASA TV broadcast.
The post AI on the Sky: Stunning New Images from the James Webb Space Telescope to Be Analyzed by, and Train, AI appeared first on NVIDIA Blog.
Optimal Algorithms for Mean Estimation under Local Differential Privacy
We study the problem of mean estimation of ℓ2-bounded vectors under the constraint of local differential privacy. While the literature has a variety of algorithms that achieve the asymptotically optimal rates for this problem, the performance of these algorithms in practice can vary significantly due to varying (and often large) hidden constants. In this work, we investigate the question of designing the protocol with the smallest variance. We show that PrivUnit (Bhowmick et al. 2018) with optimized parameters achieves the optimal variance among a large family of locally private randomizers. To…
Apple Machine Learning Research
Private Frequency Estimation via Projective Geometry
In this work, we propose a new algorithm ProjectiveGeometryResponse (PGR) for locally differentially private (LDP) frequency estimation. For a universe of size k and with n users, our ε-LDP algorithm has communication cost bits in the private coin setting and in the public coin setting, and has computation cost for the server to approximately reconstruct the frequency histogram, while achieving the state-of-the-art privacy-utility tradeoff. In many parameter settings used in practice this is a significant improvement over the $O(n + k^2)$ computation cost that is achieved by the recent…
Apple Machine Learning Research