Existing vision-language models exhibit strong generalization on a variety of visual domains and tasks. However, such models mainly perform zero-shot recognition in a closed-set manner, and thus struggle to handle open-domain visual concepts by design. There are recent finetuning methods, such as prompt learning, that not only study the discrimination between in-distribution (ID) and out-of-distribution (OOD) samples, but also show some improvements in both ID and OOD accuracies. In this paper, we first demonstrate that vision-language models, after long enough finetuning but without proper…
Apple Machine Learning Research
Hierarchical and Dynamic Prompt Compression for Efficient Zero-shot API Usage
Long prompts present a significant challenge for practical LLM-based systems that need to operate with low latency and limited resources. We investigate prompt compression for zero-shot dialogue systems that learn to use unseen APIs directly in-context from their documentation, which may take up hundreds of prompt tokens per API. We start from a recently introduced approach (Mu et al., 2023) that learns to compress the prompt into a few “gist token” activations during finetuning. However, this simple idea is ineffective in compressing API documentation, resulting in low accuracy compared to…
Apple Machine Learning Research
Introducing OpenAI Japan
We are excited to announce our first office in Asia and we’re releasing a GPT-4 custom model optimized for the Japanese language.
OpenAI Blog
International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2024
Apple Machine Learning Research
How to Regularize Your Regression
A series of regression instances in a pharmaceutical application. Can we learn how to set the regularization parameter (lambda) from similar domain-specific data?
Overview. Perhaps the simplest relation between a real dependent variable (y) and a vector of features (X) is a linear model (y = beta X). Given some training examples or datapoints consisting of pairs of features and dependent variables ((X_1, y_1),(X_2, y_2),dots,(X_m,y_m)), we would like to learn (beta) which would give the best prediction (y’) given features (X’) of an unseen example. This process of fitting a linear model (beta) to the datapoints is called linear regression. This simple yet effective model finds ubiquitous applications, ranging from biological, behavioral, and social sciences to environmental studies and financial forecasting, where it is used to make reliable predictions on future data. In ML terminology, linear regression is a supervised learning algorithm with low variance and good generalization properties. It is much less data-hungry than typical deep learning models, and performs well even with small amounts of training data. Further, to avoid overfitting the model to the training data, which reduces the prediction performance on unseen data, one typically uses regularization, which modifies the objective function of the linear model to reduce the impact of outliers and irrelevant features (read on for details).
The most common method for linear regression is “regularized least squares”, where one finds the (beta) which minimizes
$$\|y - X\beta\|_2^2 + \lambda \|\beta\|.$$
Here the first term captures the error of (beta) on the training set, and the second term is a norm-based penalty to avoid overfitting (e.g. reducing the impact of outliers in the data). How to set (lambda) appropriately in this fundamental method depends on the data domain and is a longstanding open question. In typical modern applications, we have access to several similar datasets (X^{(0)},y^{(0)}, X^{(1)},y^{(1)}, dots) from the same application domain. For example, there are often multiple drug trial studies in a pharmaceutical company for studying the different effects of similar drugs. In this work, we show that we can indeed learn a good domain-specific value of (lambda) with strong theoretical guarantees of accuracy on unseen datasets from the same domain, and give bounds on how much data is needed to achieve this.
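To make the role of (lambda) concrete, here is a minimal numpy sketch (ours, not code from the papers) for the special case of a squared L2 penalty, i.e. ridge regression, which admits a closed-form minimizer (shown in the code comment). Larger values of (lambda) shrink the learned coefficients toward zero.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: beta = (X^T X + lam * I)^{-1} X^T y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Larger lambda shrinks the learned coefficients toward zero.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=50)
for lam in (0.0, 1.0, 100.0):
    print(lam, np.round(ridge_fit(X, y, lam), 3))
```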
As our main result, we show that if the data has (p) features (i.e., the dimension of the feature vector (X_i) is (p)), then after seeing (O(p/epsilon^2)) datasets, we can learn a value of (lambda) whose error (averaged over the domain) is within (epsilon) of the error of the best possible value of (lambda) for the domain. We also extend our results to sequential data, binary classification (i.e. (y) is binary valued), and non-linear regression.
Problem Setup. Linear regression with norm-based regularization penalty is one of the most popular techniques that one encounters in introductory courses to statistics or machine learning. It is widely used for data analysis and feature selection, with numerous applications including medicine, quantitative finance (the linear factor model), climate science, and so on. The regularization penalty is typically a weighted additive term (or terms) of the norms of the learned linear model (beta), where the weight is carefully selected by a domain expert. Mathematically, a dataset has dependent variable (y) consisting of (m) examples, and predictor variables (X) with (p) features for each of the (m) datapoints. The linear regression approach (with squared loss) consists of solving a minimization problem
$$\hat{\beta}^{X,y}_{\lambda_1,\lambda_2}=\text{argmin}_{\beta\in\mathbb{R}^p}\|y-X\beta\|^2+\lambda_1\|\beta\|_1+\lambda_2\|\beta\|_2^2,$$
where the last two terms constitute the regularization penalty. Here (lambda_1, lambda_2ge 0) are the regularization coefficients constraining the L1 and L2 norms, respectively, of the learned linear model (beta). For general (lambda_1) and (lambda_2) the above algorithm is popularly known as the Elastic Net, while setting (lambda_1 = 0) recovers Ridge regression and setting (lambda_2 = 0) corresponds to LASSO. Ridge and LASSO regression are both individually popular methods in practice, and the Elastic Net incorporates the advantages of both.
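For readers who want to experiment, the Elastic Net objective above can be minimized with standard solvers. The sketch below is an illustration we added, not code from the papers; it maps the ((lambda_1, lambda_2)) parameterization above onto scikit-learn's ElasticNet, which scales the training error by 1/(2m) and uses an (alpha, l1_ratio) parameterization.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def fit_elastic_net(X, y, lam1, lam2):
    """Minimize ||y - X beta||^2 + lam1 * ||beta||_1 + lam2 * ||beta||_2^2.

    scikit-learn's ElasticNet minimizes
        (1/(2m)) ||y - X beta||^2 + alpha * l1_ratio * ||beta||_1
            + 0.5 * alpha * (1 - l1_ratio) * ||beta||_2^2,
    so matching terms (after dividing our objective by 2m) gives the
    reparameterization below. Assumes lam1 + lam2 > 0.
    """
    m = X.shape[0]
    alpha = (lam1 + 2.0 * lam2) / (2.0 * m)
    l1_ratio = lam1 / (lam1 + 2.0 * lam2)
    model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, fit_intercept=False)
    model.fit(X, y)
    return model.coef_
```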
Despite the central role these coefficients play in linear regression, the problem of setting them in a principled way has been a challenging open problem for several decades. In practice, one typically uses “grid search” cross-validation, which involves (1) splitting the dataset into several subsets consisting of training and validation sets, (2) training several models (corresponding to different values of regularization coefficients) on each training set, and (3) comparing the performance of the models on the corresponding validation sets; a minimal sketch of this baseline appears after the list below. This approach has several limitations.
- First, this is very computationally intensive, especially with the large datasets that typical modern applications involve, as one needs to train and evaluate the model for a large number of hyperparameter values and training-validation splits. We would like to avoid repeating this cumbersome process for similar applications.
- Second, theoretical guarantees on how well the coefficients learned by this procedure will perform on unseen examples are not known, even when the test data are drawn from the same distribution as the training set.
- Finally, this can only be done for a finite set of hyperparameter values and it is not clear how the selected parameter compares to the best parameter from the continuous domain of coefficients. In particular, the loss as a function of the regularization parameter is not known to be Lipschitz.
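For reference, here is a minimal sketch of the grid-search baseline described above (an illustration we added, written against scikit-learn's (alpha, l1_ratio) parameterization rather than ((lambda_1, lambda_2))). The total number of model fits grows as the product of the grid size and the number of splits, and the whole procedure must be repeated from scratch for every new dataset.

```python
import numpy as np
from itertools import product
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import KFold

def grid_search_elastic_net(X, y, alphas, l1_ratios, n_splits=5):
    """Grid-search cross-validation baseline.
    Total cost: len(alphas) * len(l1_ratios) * n_splits model fits."""
    best_params, best_loss = None, np.inf
    for alpha, l1_ratio in product(alphas, l1_ratios):
        fold_losses = []
        for train_idx, val_idx in KFold(n_splits=n_splits).split(X):
            model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, fit_intercept=False)
            model.fit(X[train_idx], y[train_idx])
            fold_losses.append(np.mean((model.predict(X[val_idx]) - y[val_idx]) ** 2))
        if np.mean(fold_losses) < best_loss:
            best_params, best_loss = (alpha, l1_ratio), np.mean(fold_losses)
    return best_params
```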
Our work addresses all three of the above limitations simultaneously in the data-driven setting, which we motivate and describe next.
The importance of regularization
A visualization of the L1 and L2 regularized regressions.
The regularization coefficients (lambda_1) and (lambda_2) play a crucial role across fields: In machine learning, controlling the norm of model weights (beta) implies provable generalization guarantees and prevents over-fitting in practice. In statistical data analysis, their combined use yields parsimonious and interpretable models. In Bayesian statistics they correspond to imposing specific priors on (beta). Effectively, (lambda_2) regularizes (beta) by uniformly shrinking all coefficients, while (lambda_1) encourages the model vector to be sparse. This means that while they do yield learning-theoretic and statistical benefits, setting them to be too high will cause models to under-fit to the data. The question of how to set the regularization coefficients becomes even more unclear in the case of the Elastic Net, as one must juggle trade-offs between sparsity, feature correlation, and bias when setting both (lambda_1) and (lambda_2) simultaneously.
The data-driven algorithm design paradigm
In many applications, one has access to not just a single dataset, but a large number of similar datasets coming from the same domain. This is increasingly true in the age of big data, where an increasing number of fields are recording and storing data for the purpose of pattern analysis. For example, a drug company typically conducts a large number of trials for a variety of different drugs. Similarly, a climate scientist monitors several different environmental variables and continuously collects new data. In such a scenario, can we exploit the similarity of the datasets to avoid doing cumbersome cross-validation each time we see a new dataset? This motivates the data-driven algorithm design setting, introduced in the theory of computing community by Gupta and Roughgarden as a tool for design and analysis of algorithms that work well on typical datasets from an application domain (as opposed to worst-case analysis). This approach has been successfully applied to several combinatorial problems including clustering, mixed integer programming, automated mechanism design, and graph-based semi-supervised learning (Balcan, 2020). We show how to apply this analytical paradigm to tuning the regularization parameters in linear regression, extending the scope of its application beyond combinatorial problems [1, 2].
The learning model
Formally, we model data coming from the same domain as a fixed (but unknown) distribution (D) over the problem instances. To capture the well-known cross-validation setting, we consider each problem instance of the form (P=(X_{text{train}}, y_{text{train}}, X_{text{val}}, y_{text{val}})). That is, the random process that generates the datasets and the (random or deterministic) process that generates the splits given the data, have been combined under (D). The goal of the learning process is to take (N) problem samples generated from the distribution (D), and learn regularization coefficients (hat{lambda}=(lambda_1, lambda_2)) that would generalize well over unseen problem instances drawn from (D). That is, on an unseen test instance (P’=(X’_{text{train}}, y’_{text{train}}, X’_{text{val}}, y’_{text{val}})), we will fit the model (beta) using the learned regularization coefficients (hat{lambda}) on (X’_{text{train}}, y’_{text{train}}), and evaluate the loss on the set (X’_{text{val}}, y’_{text{val}}). We seek the value of (hat{lambda}) that minimizes this loss, in expectation over the draw of the random test sample from (D).
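As an illustration of this learning model (not the algorithm from the papers), the sketch below performs empirical risk minimization over a finite set of candidate regularization values, averaging the validation loss over the N sampled problem instances; the actual analysis handles the continuous parameter space.

```python
import numpy as np

def tune_regularization(problems, candidate_lambdas, fit, val_loss):
    """Pick the candidate regularization value with the smallest average
    validation loss over the N sampled problem instances.

    problems: list of (X_train, y_train, X_val, y_val) tuples drawn from D.
    fit(X, y, lam) -> model;  val_loss(model, X, y) -> float.
    """
    avg_losses = []
    for lam in candidate_lambdas:
        losses = [val_loss(fit(X_tr, y_tr, lam), X_va, y_va)
                  for X_tr, y_tr, X_va, y_va in problems]
        avg_losses.append(np.mean(losses))
    return candidate_lambdas[int(np.argmin(avg_losses))]
```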
How much data do we need?
The model (beta) clearly depends on both the dataset ((X,y)), and the regularization coefficients (lambda_1, lambda_2). A key tool in data-driven algorithm design is the analysis of the “dual function”, which is the loss expressed as a function of the parameters, for a fixed problem instance. This is typically easier to analyze than the “primal function” (loss for a fixed parameter, as problem instances are varied) in data-driven algorithm design problems. For Elastic Net regression, the dual is the validation loss on a fixed validation set for models trained with different values of (lambda_1, lambda_2) (i.e. two-parameter function) for a fixed training set. Typically the dual functions in combinatorial problems exhibit a piecewise structure, where the behavior of the loss function can have sharp transitions across the pieces. For example, in clustering this piecewise behavior could correspond to learning a different cluster in each piece. Prior research has shown that if we can bound the complexity of the boundary and piece functions in the dual function, then we can give a sample complexity guarantee, i.e. we can answer the question “how much data is sufficient to learn a good value of the parameter?”
An illustration of the piecewise structure of the Elastic Net dual loss function. Here (r_1) and (r_2) are polynomial boundary functions, and (f_{*,*}) are piece functions which are fixed rational functions given the signs of boundary functions.
Somewhat surprisingly, we show that the dual loss function exhibits a piecewise structure even in linear regression, a classic continuous optimization problem. Intuitively, the pieces correspond to different subsets of the features being “active”, i.e. having non-zero coefficients in the learned model (beta). Specifically, we show that the piece boundaries of the dual function are polynomial functions of bounded degree, and the loss within each piece is a rational function (ratio of two polynomial functions) again of bounded degree. We use this structure to establish a bound on the learning-theoretic complexity of the dual function; more precisely, we bound its pseudo-dimension (a generalization of the VC dimension to real-valued functions).
Theorem. The pseudo-dimension of the Elastic Net dual loss function is (Theta(p)), where (p) is the feature dimension.
(Theta(p)) notation here means we have an upper bound of (O(p)) as well as a lower bound (Omega(p)) on the pseudo-dimension. Roughly speaking, the pseudo-dimension captures the complexity of the function class from a learning perspective, and corresponds to the number of samples needed to guarantee small generalization error (average error on test data). Remarkably, we show an asymptotically tight bound on the pseudo-dimension by establishing an (Omega(p)) lower bound, which is technically challenging and needs an explicit construction of a collection of “hard” instances. Tight lower bounds are not known for several typical problems in data-driven algorithm design. Our bound depends only on (p) (the number of features) and is independent of the number of datapoints (m). An immediate consequence of our bound is the following sample complexity guarantee:
Theorem. Given any distribution (D) (fixed, but unknown), we can learn regularization parameters (hat{lambda}) which obtain error within any (epsilon>0) of the best possible parameter with probability (1-delta) using only (O((p+log(1/delta))/epsilon^2)) problem samples.
One way to understand our results is to instantiate them in the cross-validation setting. Consider the commonly used techniques of leave-one-out cross validation (LOOCV) and Monte Carlo cross validation (repeated random test-validation splits, typically independent and in a fixed proportion). Given a dataset of size (m_{text{tr}}), LOOCV would require (m_{text{tr}}) regression fits, which can be computationally expensive for large datasets. Alternately, we can consider draws from a distribution (D_{text{LOO}}) which generates problem instances (P) from a fixed dataset ((X, y) in mathbb{R}^{(m+1)times p} times mathbb{R}^{m+1}) by uniformly selecting (j in [m + 1]) and setting (P = (X_{-j,*}, y_{-j}, X_{j,*}, y_j)), i.e. training on all rows except (j) and validating on row (j). Our result now implies that roughly (O(p/epsilon^2)) iterations are enough to determine an Elastic Net parameter (hat{lambda}) with loss within (epsilon) (with high probability) of the parameter (lambda^*) obtained from running the full LOOCV. Similarly, we can define a distribution (D_{text{MC}}) to capture the Monte Carlo cross validation procedure and determine the number of iterations sufficient to get an (epsilon)-approximation of the loss corresponding to the parameter selected with an arbitrarily large number of runs. Thus, in a very precise sense, our results answer the question of how much cross-validation is enough to effectively implement the above techniques.
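To make the LOOCV instantiation concrete, the following sketch (our illustration) samples problem instances from the distribution (D_{text{LOO}}) described above by holding out one uniformly random row as the validation set; feeding roughly (O(p/epsilon^2)) such draws into a tuning routine like the one sketched earlier approximates the parameter chosen by running the full LOOCV.

```python
import numpy as np

def sample_loo_instance(X, y, rng):
    """One draw from D_LOO: hold out a uniformly random row as the validation
    'set' and train on the remaining rows."""
    j = rng.integers(len(y))
    keep = np.arange(len(y)) != j
    return X[keep], y[keep], X[j:j + 1], y[j:j + 1]

# Roughly O(p / eps^2) such draws, fed to a tuning routine like the one
# sketched earlier, suffice to get within eps of the loss of the parameter
# chosen by running the full LOOCV.
```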
Sequential data and online learning
A more challenging variant of the problem assumes that the problem instances arrive sequentially, and we need to set the parameter for each instance using only the previously seen instances. We can think of this as a game between an online adversary and the learner, where the adversary wants to make the sequence of problems as hard as possible. Note that we no longer assume that the problem instances are drawn from a fixed distribution, and this setting allows problem instances to depend on previously seen instances which is typically more realistic (even if there is no actual adversary generating worst-case problem sequences). The learner’s goal is to perform as well as the best fixed parameter in hindsight, and the difference is called the “regret” of the learner.
To obtain positive results, we make a mild assumption on the smoothness of the data: we assume that the prediction values (y) are drawn from a bounded density distribution. This captures a common data pre-processing step of adding a small amount of uniform noise to the data for model stability, e.g. by setting the jitter parameter in the popular Python library scikit-learn. Under this assumption, we show further structure on the dual loss function. Roughly speaking, we show that the locations of the piece boundaries of the dual function across the problem instances do not concentrate in a small region of the parameter space. This in turn implies (using Balcan et al., 2018) the existence of an online learner with average expected regret (O(1/sqrt{T})), meaning that we converge to the performance of the best fixed parameter in hindsight as the number of online rounds (T) increases.
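For intuition about the online protocol, here is a simplified exponentially weighted forecaster over a finite grid of regularization values (an illustration we added; it is not the algorithm analyzed in the paper, which exploits the dispersion structure above to compete with the best parameter in the continuous domain).

```python
import numpy as np

def hedge_over_lambda_grid(problem_stream, lambda_grid, evaluate_loss, eta=0.1, seed=0):
    """Exponentially weighted forecaster over a finite grid of regularization
    values (full information: after each round we observe the loss of every
    grid point). evaluate_loss(problem, lam) should return a loss in [0, 1]."""
    rng = np.random.default_rng(seed)
    weights = np.ones(len(lambda_grid))
    total_loss = 0.0
    for problem in problem_stream:
        probs = weights / weights.sum()
        lam = lambda_grid[rng.choice(len(lambda_grid), p=probs)]  # parameter played this round
        total_loss += evaluate_loss(problem, lam)
        losses = np.array([evaluate_loss(problem, l) for l in lambda_grid])
        weights *= np.exp(-eta * losses)  # downweight parameters that did poorly
    return total_loss
```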
Extension to binary classification, including logistic regression
Linear classifiers are also popular for the task of binary classification, where the (y) values are now restricted to two classes (encoded as (pm 1) in the loss below). Regularization is also crucial here to learn effective models by avoiding overfitting and selecting important variables. It is particularly common to use logistic regression, where the squared loss above is replaced by the logistic loss function,
$$l_{\text{RLR}}(\beta,(X,y))=\frac{1}{m}\sum_{i=1}^m\log(1+\exp(-y_ix_i^T\beta)).$$
The exact loss minimization problem is significantly more challenging in this case, and it is correspondingly difficult to analyze the dual loss function. We overcome this challenge by using a proxy dual function which approximates the true loss function, but has a simpler piecewise structure. Roughly speaking, the proxy function considers a fine parameter grid of width (epsilon) and approximates the loss function at each point on the grid. Furthermore, it is piecewise linear and known to approximate the true loss function to within an error of (O(epsilon^2)) at all points (Rosset, 2004).
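The sketch below illustrates the gridding idea numerically: it evaluates the validation logistic loss on a grid of regularization values of width (epsilon) and interpolates linearly in between. It is only an illustration we added, not the exact proxy construction from the paper; it uses scikit-learn's solver and assumes labels encoded as (pm 1).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def proxy_validation_loss(X_tr, y_tr, X_val, y_val, lam_max, eps):
    """Evaluate the validation logistic loss on an eps-grid of regularization
    values and interpolate linearly in between. Labels are assumed to be
    encoded as +1/-1. scikit-learn's C is an inverse regularization strength,
    used here as roughly 1/lambda (up to scaling conventions)."""
    grid = np.arange(eps, lam_max + eps, eps)
    losses = []
    for lam in grid:
        clf = LogisticRegression(C=1.0 / lam, max_iter=1000).fit(X_tr, y_tr)
        margins = y_val * (X_val @ clf.coef_.ravel() + clf.intercept_[0])
        losses.append(np.mean(np.log1p(np.exp(-margins))))
    return lambda lam: float(np.interp(lam, grid, losses))
```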
Our main result for logistic regression is that the generalization error with (N) samples drawn from the distribution (D) is bounded by (O(sqrt{(m^2+log 1/epsilon)/N}+epsilon^2+sqrt{(log 1/delta)/N})), with high probability (1-delta) over the draw of samples. Here (m) is the size of the validation set, which is often small or even constant. While this bound is incomparable to the pseudo-dimension-based bounds above, we do not have lower bounds in this setting, and the tightness of our results is an interesting open question.
Beyond the linear case: kernel regression
So far, we have assumed that the dependent variable (y) has a linear dependence on the predictor variables. While this is a great first thing to try in many applications, very often there is a non-linear relationship between the variables. As a result, linear regression can perform poorly in some applications. A common alternative is to use Kernelized Least Squares Regression, where the input (X) is implicitly mapped to a high (or even infinite) dimensional feature space using the “kernel trick”. As a corollary of our main results, we can show that the pseudo-dimension of the dual loss function in this case is (O(m)), where (m) is the size of the training set in a single problem instance. Our results do not make any assumptions on the (m) samples within a problem instance/dataset; if these samples are further assumed to be i.i.d. draws from some data distribution (distinct from the problem distribution (D)), then well-known results imply that (m = O(k log p)) samples are sufficient to learn the optimal LASSO coefficient ((k) denotes the number of non-zero coefficients in the optimal regression fit).
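For completeness, here is a minimal kernel ridge regression sketch (our illustration): with an (m times m) kernel matrix on the training set, the dual coefficients are obtained by solving a regularized linear system, and (lambda) is again the single regularization parameter to tune. The RBF kernel and its bandwidth below are illustrative choices.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """K[i, j] = exp(-gamma * ||A_i - B_j||^2); the kernel choice is illustrative."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def kernel_ridge_fit_predict(X_train, y_train, X_test, lam, gamma=1.0):
    """Kernelized least squares: dual coefficients alpha = (K + lam * I)^{-1} y,
    predictions = K(X_test, X_train) @ alpha. lam is again the regularization
    parameter to tune."""
    K = rbf_kernel(X_train, X_train, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(y_train)), y_train)
    return rbf_kernel(X_test, X_train, gamma) @ alpha
```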
Some final remarks
We consider how to tune the norm-based regularization parameters in linear regression. We pin down the learning-theoretic complexity of the loss function, which may be of independent interest. Our results extend to online learning, linear classification, and kernel regression. A key direction for future research is developing an efficient implementation of the algorithms underlying our approach.
More broadly, regularization is a fundamental technique in machine learning, including deep learning where it can take the form of dropout rates, or parameters in the loss function, with significant impact on the performance of the overall algorithm. Our research opens up the exciting question of tuning learnable parameters even in continuous optimization problems. Finally, our research captures an increasingly typical scenario with the advent of the data era, where one has access to repeated instances of data from the same application domain.
For further details about our cool results and the mathematical machinery we used to derive them, check out our papers linked below!
[1] Balcan, M.-F., Khodak, M., Sharma, D., & Talwalkar, A. (2022). Provably tuning the ElasticNet across instances. Advances in Neural Information Processing Systems, 35.
[2] Balcan, M.-F., Nguyen, A., & Sharma, D. (2023). New Bounds for Hyperparameter Tuning of Regression Problems Across Instances. Advances in Neural Information Processing Systems, 36.
101 real-world gen AI use cases featured at Google Cloud Next ’24
Hundreds of industry leaders joined us at Google Cloud Next ‘24. Here’s a snapshot of how some of them are using AI technologies. Read More
Cost-effective document classification using the Amazon Titan Multimodal Embeddings Model
Organizations across industries want to categorize and extract insights from high volumes of documents of different formats. Manually processing these documents to classify and extract information remains expensive, error prone, and difficult to scale. Advances in generative artificial intelligence (AI) have given rise to intelligent document processing (IDP) solutions that can automate document classification and create a cost-effective classification layer capable of handling diverse, unstructured enterprise documents.
Categorizing documents is an important first step in IDP systems. It helps you determine the next set of actions to take depending on the type of document. For example, during the claims adjudication process, the accounts payable team receives the invoice, whereas the claims department manages the contract or policy documents. Traditional rule engines or ML-based classification can classify the documents, but often reach a limit on the document formats they support and on the dynamic addition of new document classes. For more information, see Amazon Comprehend document classifier adds layout support for higher accuracy.
In this post, we discuss document classification using the Amazon Titan Multimodal Embeddings model to classify any document type without the need for training.
Amazon Titan Multimodal Embeddings
Amazon recently introduced Titan Multimodal Embeddings in Amazon Bedrock. This model can create embeddings for images and text, enabling the creation of document embeddings to be used in new document classification workflows.
It generates optimized vector representations of documents scanned as images. By encoding both visual and textual components into unified numerical vectors that encapsulate semantic meaning, it enables rapid indexing, powerful contextual search, and accurate classification of documents.
As new document templates and types emerge in business workflows, you can simply invoke the Amazon Bedrock API to dynamically vectorize them and append them to your IDP system to rapidly enhance document classification capabilities.
Solution overview
Let’s examine the following document classification solution with the Amazon Titan Multimodal Embeddings model. For optimal performance, you should customize the solution to your specific use case and existing IDP pipeline setup.
This solution classifies documents using vector embedding semantic search by matching an input document to an already indexed gallery of documents. We use the following key components:
- Embeddings – Embeddings are numerical representations of real-world objects that machine learning (ML) and AI systems use to understand complex knowledge domains like humans do.
- Vector databases – Vector databases are used to store embeddings. Vector databases efficiently index and organize the embeddings, enabling fast retrieval of similar vectors based on distance metrics like Euclidean distance or cosine similarity.
- Semantic search – Semantic search works by considering the context and meaning of the input query and its relevance to the content being searched. Vector embeddings are an effective way to capture and retain the contextual meaning of text and images. In our solution, when an application wants to perform a semantic search, the search document is first converted into an embedding. The vector database with relevant content is then queried to find the most similar embeddings.
In the labeling process, a sample set of business documents like invoices, bank statements, or prescriptions are converted into embeddings using the Amazon Titan Multimodal Embeddings model and stored in a vector database against predefined labels. The Amazon Titan Multimodal Embeddings model was trained using the Euclidean L2 algorithm, so for best results the vector database used should support this distance metric.
The following architecture diagram illustrates how you can use the Amazon Titan Multimodal Embeddings model with documents in an Amazon Simple Storage Service (Amazon S3) bucket for image gallery creation.
The workflow consists of the following steps:
- A user or application uploads a sample document image with classification metadata to a document image gallery. An S3 prefix or S3 object metadata can be used to classify gallery images.
- An Amazon S3 object notification event invokes the embedding AWS Lambda function.
- The Lambda function reads the document image and translates the image into embeddings by calling Amazon Bedrock and using the Amazon Titan Multimodal Embeddings model.
- Image embeddings, along with document classification, are stored in the vector database.
When a new document needs classification, the same embedding model is used to convert the query document into an embedding. Then, a semantic similarity search is performed on the vector database using the query embedding. The label retrieved against the top embedding match will be the classification label for the query document.
The following architecture diagram illustrates how to use the Amazon Titan Multimodal Embeddings model with documents in an S3 bucket for image classification.
The workflow consists of the following steps:
- Documents that require classification are uploaded to an input S3 bucket.
- The classification Lambda function receives the Amazon S3 object notification.
- The Lambda function translates the image to an embedding by calling the Amazon Bedrock API.
- The vector database is searched for a matching document using semantic search. Classification of the matching document is used to classify the input document.
- The input document is moved to the target S3 directory or prefix using the classification retrieved from the vector database search.
To help you test the solution with your own documents, we have created an example Python Jupyter notebook, which is available on GitHub.
Prerequisites
To run the notebook, you need an AWS account with appropriate AWS Identity and Access Management (IAM) permissions to call Amazon Bedrock. Additionally, on the Model access page of the Amazon Bedrock console, make sure that access is granted for the Amazon Titan Multimodal Embeddings model.
Implementation
In the following steps, replace each user input placeholder with your own information:
- Create the vector database. In this solution, we use an in-memory FAISS database, but you could use an alternative vector database. Amazon Titan’s default dimension size is 1024.
- After the vector database is created, enumerate over the sample documents, create an embedding for each, and store those embeddings in the vector database.
- Test with your documents. Replace the folders referenced in the sketch after these steps with your own folders that contain known document types.
- Using the Boto3 library, call Amazon Bedrock. The variable inputImageB64 is a base64-encoded byte array representing your document. The response from Amazon Bedrock contains the embeddings.
- Add the embeddings to the vector database, with a class ID that represents a known document type.
- With the vector database populated with images (representing our gallery), you can uncover similarities with new documents by searching the index, as shown in the sketch after these steps. Passing k=1 tells FAISS to return the top 1 match.
In addition, the Euclidean L2 distance between the image on hand and the found image is also returned. If the image is an exact match, this value would be 0. The larger this value is, the further apart the images are in similarity.
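Since the notebook's code blocks are not reproduced in this post, the consolidated Python sketch below illustrates the steps above end to end. The model ID, request body fields, gallery structure, and file paths are assumptions based on the Titan Multimodal Embeddings documentation at the time of writing; refer to the notebook on GitHub and the Amazon Bedrock documentation for the authoritative version.

```python
import base64
import json

import boto3
import faiss
import numpy as np

bedrock = boto3.client("bedrock-runtime")  # assumes credentials and Region are configured
MODEL_ID = "amazon.titan-embed-image-v1"   # Titan Multimodal Embeddings (assumed model ID)
DIMENSION = 1024                           # Amazon Titan's default dimension size

def embed_image(image_path: str) -> np.ndarray:
    """Call Amazon Bedrock to embed a document image with Titan Multimodal Embeddings."""
    with open(image_path, "rb") as f:
        input_image_b64 = base64.b64encode(f.read()).decode("utf-8")
    body = json.dumps({
        "inputImage": input_image_b64,
        "embeddingConfig": {"outputEmbeddingLength": DIMENSION},
    })
    response = bedrock.invoke_model(
        modelId=MODEL_ID, body=body,
        accept="application/json", contentType="application/json",
    )
    embedding = json.loads(response["body"].read())["embedding"]
    return np.array(embedding, dtype="float32")

# 1. Create an in-memory FAISS index using Euclidean (L2) distance.
index = faiss.IndexFlatL2(DIMENSION)
labels = []  # parallel list: labels[i] is the class of the i-th indexed vector

# 2. Index the gallery: one embedding per labeled sample document.
#    `gallery` is a hypothetical {class_label: [image paths]} mapping; use your own folders.
gallery = {"invoice": ["samples/invoice1.png"], "bank_statement": ["samples/statement1.png"]}
for label, paths in gallery.items():
    for path in paths:
        index.add(embed_image(path).reshape(1, -1))
        labels.append(label)

# 3. Classify a new document: the nearest gallery embedding wins (k=1).
k = 1
query = embed_image("incoming/unknown_doc.png").reshape(1, -1)
distances, ids = index.search(query, k)
print("predicted class:", labels[ids[0][0]], "L2 distance:", distances[0][0])
```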
Additional considerations
In this section, we discuss additional considerations for using the solution effectively. This includes data privacy, security, integration with existing systems, and cost estimates.
Data privacy and security
The AWS shared responsibility model applies to data protection in Amazon Bedrock. As described in this model, AWS is responsible for protecting the global infrastructure that runs all of the AWS Cloud. Customers are responsible for maintaining control over their content that is hosted on this infrastructure. As a customer, you are responsible for the security configuration and management tasks for the AWS services that you use.
Data protection in Amazon Bedrock
Amazon Bedrock doesn’t use customer prompts and continuations to train AWS models or share them with third parties. Amazon Bedrock doesn’t store or log customer data in its service logs. Model providers don’t have access to Amazon Bedrock logs or access to customer prompts and continuations. As a result, the images used for generating embeddings through the Amazon Titan Multimodal Embeddings model are not stored or employed in training AWS models or external distribution. Additionally, other usage data, such as timestamps and logged account IDs, is excluded from model training.
Integration with existing systems
The Amazon Titan Multimodal Embeddings model underwent training with the Euclidean L2 algorithm, so the vector database being used should be compatible with this algorithm.
Cost estimate
At the time of writing this post, as per Amazon Bedrock Pricing for the Amazon Titan Multimodal Embeddings model, the following are the estimated costs using on-demand pricing for this solution:
- One-time indexing cost – $0.06 for a single run of indexing, assuming a gallery of 1,000 images
- Classification cost – $6 for 100,000 input images per month
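As a quick consistency check, both estimates imply a per-image embedding price of about $0.00006: 1,000 images × $0.00006 = $0.06 for indexing, and 100,000 images × $0.00006 = $6 per month for classification. Confirm current rates on the Amazon Bedrock pricing page before relying on these figures.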
Clean up
To avoid incurring future charges, delete the resources you created, such as the Amazon SageMaker notebook instance, when not in use.
Conclusion
In this post, we explored how you can use the Amazon Titan Multimodal Embeddings model to build an inexpensive solution for document classification in the IDP workflow. We demonstrated how to create an image gallery of known documents and perform similarity searches with new documents to classify them. We also discussed the benefits of using multimodal image embeddings for document classification, including their ability to handle diverse document types, scalability, and low latency.
As new document templates and types emerge in business workflows, developers can invoke the Amazon Bedrock API to vectorize them dynamically and append them to their IDP systems to rapidly enhance document classification capabilities. This creates an inexpensive, highly scalable classification layer that can handle even the most diverse, unstructured enterprise documents.
Overall, this post provides a roadmap for building an inexpensive solution for document classification in the IDP workflow using Amazon Titan Multimodal Embeddings.
As next steps, check out What is Amazon Bedrock to start using the service. And follow Amazon Bedrock on the AWS Machine Learning Blog to keep up to date with new capabilities and use cases for Amazon Bedrock.
About the Authors
Sumit Bhati is a Senior Customer Solutions Manager at AWS who specializes in expediting the cloud journey for enterprise customers. Sumit is dedicated to assisting customers through every phase of their cloud adoption, from accelerating migrations to modernizing workloads and facilitating the integration of innovative practices.
David Girling is a Senior AI/ML Solutions Architect with over 20 years of experience in designing, leading, and developing enterprise systems. David is part of a specialist team that focuses on helping customers learn, innovate, and utilize these highly capable services with their data for their use cases.
Ravi Avula is a Senior Solutions Architect at AWS focusing on enterprise architecture. Ravi has 20 years of experience in software engineering and has held several leadership roles in software engineering and software architecture while working in the payments industry.
George Belsian is a Senior Cloud Application Architect at AWS. He is passionate about helping customers accelerate their modernization and cloud adoption journey. In his current role, George works alongside customer teams to strategize, architect, and develop innovative, scalable solutions.
A quick guide to Amazon’s 20+ papers at ICASSP 2024
This year’s papers address topics such as speech enhancement, spoken-language understanding, dialogue, paralinguistics, and pitch estimation. Read More
AWS at NVIDIA GTC 2024: Accelerate innovation with generative AI on AWS
AWS was delighted to present to and connect with over 18,000 in-person and 267,000 virtual attendees at NVIDIA GTC, a global artificial intelligence (AI) conference that took place in March 2024 in San Jose, California, returning to a hybrid, in-person experience for the first time since 2019.
AWS has had a long-standing collaboration with NVIDIA for over 13 years. AWS was the first Cloud Service Provider (CSP) to offer NVIDIA GPUs in the public cloud, and remains among the first to deploy NVIDIA’s latest technologies.
Looking back at AWS re:Invent 2023, Jensen Huang, founder and CEO of NVIDIA, chatted with AWS CEO Adam Selipsky on stage, discussing how NVIDIA and AWS are working together to enable millions of developers to access powerful technologies needed to rapidly innovate with generative AI. NVIDIA is known for its cutting-edge accelerators and full-stack solutions that contribute to advancements in AI. The company is combining this expertise with the highly scalable, reliable, and secure AWS Cloud infrastructure to help customers run advanced graphics, machine learning, and generative AI workloads at an accelerated pace.
The collaboration between AWS and NVIDIA further expanded at GTC 2024, with the CEOs from both companies sharing their perspectives on the collaboration and state of AI in a press release:
“The deep collaboration between our two organizations goes back more than 13 years, when together we launched the world’s first GPU cloud instance on AWS, and today we offer the widest range of NVIDIA GPU solutions for customers,” says Adam Selipsky, CEO of AWS. “NVIDIA’s next-generation Grace Blackwell processor marks a significant step forward in generative AI and GPU computing. When combined with AWS’s powerful Elastic Fabric Adapter networking, Amazon EC2 UltraClusters’ hyper-scale clustering, and our unique AWS Nitro System’s advanced virtualization and security capabilities, we make it possible for customers to build and run multi-trillion parameter large language models faster, at massive scale, and more securely than anywhere else. Together, we continue to innovate to make AWS the best place to run NVIDIA GPUs in the cloud.”
“AI is driving breakthroughs at an unprecedented pace, leading to new applications, business models, and innovation across industries,” says Jensen Huang, founder and CEO of NVIDIA. “Our collaboration with AWS is accelerating new generative AI capabilities and providing customers with unprecedented computing power to push the boundaries of what’s possible.”
Joint announcements and keynote
On the first day of the NVIDIA GTC, AWS and NVIDIA made a joint announcement focused on their strategic collaboration to advance generative AI. Huang included the AWS and NVIDIA collaboration on a slide during his keynote, highlighting the following announcements. The GTC keynote had over 21 million views within the first 72 hours.
- AWS will offer the new NVIDIA Blackwell platform as Amazon Elastic Compute Cloud (Amazon EC2) instances and NVIDIA DGX Cloud to accelerate performance of building and running inference on multi-trillion parameter large language models (LLMs). Blackwell’s secure AI capabilities integrated with the AWS Nitro System and AWS Key Management Service (AWS KMS) will provide customers end-to-end control of their training data and model weights.
- AWS will provide the cloud infrastructure for Project Ceiba, an AI supercomputer built exclusively on AWS with NVIDIA DGX Cloud, which will feature 20,736 NVIDIA GB200 Grace Blackwell Superchips capable of 414 exaflops for NVIDIA’s own AI R&D.
- The Amazon SageMaker integration with NVIDIA NIM inference microservices will help customers further optimize price-performance of foundation models running on GPUs. (To learn more, see Optimize price-performance of LLM inference on NVIDIA GPUs using the Amazon SageMaker integration with NVIDIA NIM Microservices.)
- AWS HealthOmics with the NVIDIA BioNeMo platform will accelerate generative AI in biology and drug discovery. (To learn more, refer to NVIDIA BioNeMo Expands Computer-Aided Drug Discovery With New Foundation Models, Protein language model training with NVIDIA BioNeMo framework on AWS ParallelCluster, and Find the Next Blockbuster with NVIDIA BioNeMo Framework on Amazon SageMaker.)
- The long-standing collaboration between Amazon Robotics and NVIDIA on innovations in advanced simulation was also highlighted.
Media coverage
By March 22, AWS’s announcement with NVIDIA had generated 104 articles mentioning AWS and Amazon. The vast majority of coverage mentioned AWS’s plans to offer Blackwell-based instances. Adam Selipsky appeared on CNBC’s Mad Money to discuss the long-standing collaboration between AWS and NVIDIA, among the many other ways AWS is innovating in generative AI, stating that AWS has been the first to bring many of NVIDIA’s GPUs to the cloud to drive efficiency and scalability for customers.
Project Ceiba has also been a focus in media coverage. Forbes referred to Project Ceiba as the “most exciting” project by AWS and NVIDIA, stating that it “should accelerate the pace of innovation in AI, making it possible to tackle more complex problems, develop more sophisticated models, and achieve previously unattainable breakthroughs.” The Next Platform ran an in-depth piece on Ceiba, stating that “the size and the aggregate compute of Ceiba cluster are both being radically expanded, which will give AWS a very large supercomputer in one of its data centers” and NVIDIA will use it to do AI research, among other things.
Live from GTC
“Live from GTC” was an on-site studio at GTC for invited speakers to have a fireside chat with tech influencers like VentureBeat. Chetan Kapoor, Director of Product Management for Amazon EC2 at AWS, was interviewed by VentureBeat at the Live from GTC studio, where he discussed AWS’s presence and highlighted key announcements at GTC.
The AWS booth and sessions
The AWS booth showcased generative AI services, like LLMs from Anthropic and Cohere on Amazon Bedrock, PartyRock, Amazon Q, Amazon SageMaker JumpStart, and more. Highlights included:
- AWS AI Chess Robots – Two robotic arms playing chess against each other, with each move generated in the cloud with LLMs on Amazon Bedrock and powered by the NVIDIA Jetson platform and NVIDIA GPUs
- Wormhole – An alien robot from Media.Monks, which was busy having intelligent conversations with booth visitors, powered by NVIDIA and a serverless Retrieval Augmented Generation (RAG) model using Claude 3 on Amazon Bedrock, along with other AWS services, including SageMaker, Amazon Polly, and more
Additionally, AWS had 10 GTC sessions showcasing how the latest technologies from AWS and NVIDIA can drive business outcomes using generative AI. Some highlights include:
- How Genius Sports Transforms NFL Game Viewing with Accelerated Computing on AWS (Presented by Amazon Web Services)
- Accelerate Time to Train Your Largest Generative AI Models With SageMaker HyperPod (Presented by Amazon Web Services)
AWS presence with partners and customers
During GTC, AWS invited 23 partner and customer solution demos to join its booth with either a dedicated demo kiosk or a 30-minute in-booth session. Such partners and customers included Ansys, Anthropic, Articul8, Bria.ai, Cohere, Deci, Deepbrain.AI, Denali Advanced Integration, Ganit, Hugging Face, Lilt, Linker Vision, Mavenir, MCE, Media.Monks, Modular, NVIDIA, Perplexity, Quantiphi, Run.ai, Salesforce, Second Spectrum, and Slalom.
Among them, high-potential early-stage startups in generative AI across the globe were showcased with a dedicated kiosk at the AWS booth. The AWS Startups team works closely with these companies by investing and supporting their growth, offering resources through programs like AWS Activate.
AWS Generative AI Competency
NVIDIA was one of the 45 launch partners for the new AWS Generative AI Competency program. The Generative AI Center of Excellence for AWS Partners team members were on site at the AWS booth, presenting this program for both existing and potential AWS partners. The program offers valuable resources along with best practices for all AWS partners to build, market, and sell generative AI solutions jointly with AWS.
Additional resources
Watch a video recap of the AWS presence at NVIDIA GTC 2024. For additional resources about the AWS and NVIDIA collaboration, refer to the AWS at NVIDIA GTC 2024 resource hub.
About the Author
Julie Tang is the Senior Global Partner Marketing Manager for Generative AI at Amazon Web Services (AWS), where she collaborates closely with NVIDIA to plan and execute partner marketing initiatives focused on generative AI. Throughout her tenure at AWS, she has held various partner marketing roles, including Global IoT Solutions, AWS Partner Solution Factory, and Sr. Campaign Manager in Americas Field Marketing. Prior to AWS, Julie served as the Marketing Director at Segway. She holds a Master’s degree in Communications Management with a focus on marketing and entertainment management from the University of Southern California, and dual Bachelor’s degrees in Law and Broadcast Journalism from Fudan University.
6 new conversations with global leaders on AI and society
Our new limited-series podcast explores AI’s impact on policy, the economy, science, democracy, sustainability, and more.Read More