
AWS Bedrock’s Claude-2-100k vs. Azure OpenAI’s GPT-4-32k: A Comparative Analysis

Recently in AI news, two models have been garnering significant interest: OpenAI's GPT-4 and Anthropic's Claude-2. Notably, GPT-4's latest version features a 32,000-token capacity, while the newest version of Claude boasts a remarkable 100,000-token limit.


Anthropic and Amazon are united in their commitment to the safe application of advanced foundation models during both training and deployment. Amazon's generative AI service, Bedrock, provides access to Anthropic's Claude models, of which Claude-2 is the most recent release.

Various companies across diverse sectors are utilizing Anthropic's models in conjunction with Amazon Bedrock for their project development. These models are accessible on the AWS platform through an API, enabling businesses to more efficiently develop and enhance AI-powered applications.

In earlier articles, we explored the capabilities of the Claude model, including its proficiency in solving complex mathematical problems, its geographical knowledge, and its effectiveness in understanding and performing sentiment analysis, among other things.

AWS Bedrock

AWS Bedrock is a managed service that provides API access to foundation models from leading AI start-ups, making those models straightforward to consume.

> Foundation model: a large model, typically a large language model (LLM), that is pre-trained on extensive datasets and can be adapted to a wide range of generative AI tasks.

With AWS Bedrock, businesses can select an appropriate model for their needs without the hassle of infrastructure setup and management. This service allows for choosing a foundational model and further tailoring it with specific data securely.

Data used in this process is encrypted and retained within the user's Amazon Virtual Private Cloud (VPC), ensuring confidentiality.
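Calling a Bedrock-hosted model boils down to sending a JSON request body to the runtime API. As a minimal sketch (assuming an AWS account with Bedrock access; the prompt format follows Anthropic's Human/Assistant convention):

```python
import json

def build_claude_request(prompt: str, max_tokens: int = 500) -> str:
    """Build the JSON body Bedrock expects for Anthropic's Claude models."""
    return json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    })

body = build_claude_request("Summarize the attached report.")

# With AWS credentials configured, the invocation would look like:
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.invoke_model(modelId="anthropic.claude-v2", body=body)
```

The actual `invoke_model` call is commented out because it requires credentials and Bedrock model access; the request-building step is the part shown here.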

Chinese Test Cases

* (Tests are run on each official website)

We compared the Chinese-language capabilities of Claude-2 and GPT-4 across the following five areas:

(1) Simple Question-Answer Dialogue

GPT-4's responses were more precise and concise, while Claude-2 tended to provide more comprehensive information in its answers.

(2) Handling Complex Problems (Brainstorming)

When tasked with creating a Spanish learning plan, Claude-2 did not break the plan into weekly segments, whereas GPT-4 produced a more logically structured plan that was more useful as a reference.

(3) Office Tasks (Writing Emails)

Claude-2 performed better at composing Chinese emails, whereas GPT-4's Chinese phrasing sometimes read unnaturally.

(4) Summary and Analysis

Claude-2 showed superior performance in analyzing PDF documents, effectively extracting and summarizing their main content, while GPT-4 displayed limitations in processing non-text-based PDFs.

(5) AI Creativity

In creative tasks, such as writing acrostic poetry, GPT-4 excelled by correctly embedding the specified words into the verses, a task Claude-2 did not achieve.

Conclusion: Claude-2 and GPT-4 both showed commendable performance across these tasks, each with its strengths. Claude-2 was more effective in summarization and Chinese email writing, while GPT-4 stood out in handling complex problems and creative expression.

English Test Cases

* (Tests are run with the Azure OpenAI Studio interface)

(1) Conversion of Code Between Java and Python

Our assessment indicates that Claude-2-100k on AWS Bedrock and GPT-4-32k from Azure OpenAI display similar proficiencies in code conversion, though their approaches differ.

Java -> Python
  • Claude-2-100k was able to transform 409 lines of Java code into 19 lines of Python code before it unexpectedly stopped. This might imply that Claude-2-100k either struggles with large codebases or requires additional optimization for such tasks.

  • Conversely, GPT-4-32k approached the task by converting the code into Python segments corresponding to each Java class. This method could be advantageous for users to review and incorporate the code piece by piece. Nevertheless, like Claude-2-100k, GPT-4-32k also ceased its operation prematurely.

(Screenshots: AWS Bedrock Claude-2–100k and Azure OpenAI GPT-4–32k conversion outputs.)

Python -> Java
  • When Claude-2-100k was tasked with converting 350 lines of Python code into Java, it managed to produce only 28 lines of Java code before ceasing operation.

  • Similarly, GPT-4-32k also exhibited constraints, producing only a limited amount of Java code and omitting certain features in the transition from Python to Java.

The outcome of this test reveals that both models have comparable limitations in code conversion. Users employing these models for such tasks should be ready to meticulously review and adjust the output to ensure its accuracy, completeness, and compliance with their specific coding guidelines.

Additional Information: It's noteworthy that when the Claude model was accessed through its official site, it effectively translated close to 100 lines of Python code to Java and about 138 lines from Java to Python.
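As a toy illustration of the class-by-class conversion approach GPT-4-32k took (the actual test codebases are not shown in this article, so this Java class and its Python equivalent are hypothetical):

```python
# Hypothetical Java original:
#   public class Rectangle {
#       private final double width, height;
#       public Rectangle(double w, double h) { width = w; height = h; }
#       public double area() { return width * height; }
#   }
#
# Idiomatic Python equivalent of the class above:
class Rectangle:
    def __init__(self, width: float, height: float):
        self.width = width
        self.height = height

    def area(self) -> float:
        return self.width * self.height
```

A class-at-a-time conversion like this is easy to review piecewise, which is the advantage noted for GPT-4-32k's approach.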

(2) Generating Code from Natural Language Descriptions

In assessing the code generation skills of GPT-4–32k and Claude-2–100k, both were given the same task: to create a Java program encapsulating a Tourism Agency's functionalities.

  • Response from Claude-2–100k: Claude-2–100k crafted a rudimentary structure for the program. It included the formulation of classes such as Booking and Customer, along with mentioning necessary packages.

  • Response from GPT-4–32k: GPT-4–32k delivered a response akin to Claude-2–100k, offering a basic sketch of the system with elementary features. It primarily outlined the program's framework.

Nonetheless, neither GPT-4–32k nor Claude-2–100k completed the full code for the Tourism Agency application. This indicates that Claude-2–100k, much like GPT-4–32k, exhibits a similar level of capability in generating code based on natural language prompts.
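The skeletons both models produced looked roughly like the following sketch (rendered here in Python for brevity; the source mentions Booking and Customer classes, while the method names and fields are illustrative):

```python
# Hypothetical skeleton of the Tourism Agency program, mirroring the
# basic structure both models outlined: Customer and Booking classes
# plus an agency class that ties them together.
class Customer:
    def __init__(self, name: str, email: str):
        self.name = name
        self.email = email

class Booking:
    def __init__(self, customer: Customer, destination: str, price: float):
        self.customer = customer
        self.destination = destination
        self.price = price

class TourismAgency:
    def __init__(self):
        self.bookings: list[Booking] = []

    def book(self, customer: Customer, destination: str, price: float) -> Booking:
        booking = Booking(customer, destination, price)
        self.bookings.append(booking)
        return booking

    def revenue(self) -> float:
        return sum(b.price for b in self.bookings)
```

Fleshing out payments, availability checks, and persistence is exactly the part neither model completed.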

(3) Summarizing Books and Answering Questions

The notable 100,000-token window of Claude-2, equivalent to about 75,000 words or hundreds of pages, highlights its capacity to handle extensive texts.

In testing this feature, we used 'Quantum Physics for Dummies,' a 59,950-word, 338-page book. Claude was prompted to summarize it:

“You are an expert in writing summaries. Read the book provided below and write a summary. <book>…</book>”

It took Claude-2–100k over 10 minutes to respond, and the produced summary was more a general overview of quantum physics concepts rather than a specific synopsis of the book.

However, Claude-2–100k excelled in answering specific book-related questions, accurately detailing the contents of Chapter 14 when prompted.

Comparatively, GPT-4 has a 32,000-token limit, roughly 24,000 words. Although GPT-4–32k is a robust model, its lesser token capacity compared to Claude-2–100k is evident. When faced with prompts exceeding 32,000 tokens, GPT-4 encounters a “Token limit error.”
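The article's ratio of roughly 0.75 words per token (100,000 tokens to about 75,000 words) can be used to sanity-check whether a text fits a model's window before sending it:

```python
# Rough capacity check using the article's stated ratio of
# about 0.75 words per token (100k tokens ~ 75,000 words).
WORDS_PER_TOKEN = 0.75

def fits_in_window(word_count: int, token_limit: int) -> bool:
    """Estimate whether a text of `word_count` words fits a model's context window."""
    estimated_tokens = word_count / WORDS_PER_TOKEN
    return estimated_tokens <= token_limit

# The 59,950-word book fits Claude-2's 100k window but not GPT-4's 32k:
fits_in_window(59_950, 100_000)   # True  (~79,933 estimated tokens)
fits_in_window(59_950, 32_000)    # False
```

This is only a heuristic: real tokenizers vary by language and content, so counts should be verified with the provider's tokenizer before relying on them.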

(4) Summarizing and Analyzing Documents

For further evaluation, both models were tested with a brief six-page document.



In this test, Claude-2–100k and GPT-4–32k both demonstrated their impressive abilities by correctly extracting key information from the research paper. This information encompassed the paper's title, author, date of publication, journal, and issue. Additionally, both models successfully provided clear explanations of the foundational concepts and novel contributions presented in the paper.

(5) Analyzing Data

In a data analysis task involving a CO2 emissions per capita dataset, here's how Claude-2–100k and GPT-4–32k compared:

  • Overview of Initial Data Analysis: Prompted to “Analyse the below data and provide a summary,” both Claude-2–100k and GPT-4–32k accurately summarized the dataset, showcasing their proficiency in data comprehension and summarization.



  • Speed of Response: Claude-2–100k took longer to respond compared to GPT-4–32k, which was significantly faster in answering queries. This speed could be important for real-time or urgent data analysis tasks.

  • Answering Questions: Both models were adept at responding to a range of questions about the dataset, accurately addressing queries like “Which countries have the highest CO2 emissions per capita in 2020?” and “What has been the global trend in CO2 emissions per capita from 2006 to 2020?”
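The dataset queries above can be expressed programmatically; the sketch below uses a tiny hypothetical slice of per-capita figures (the values are placeholders for illustration, not the dataset from the test):

```python
# Placeholder per-capita CO2 values (tonnes per person) for illustration only.
co2_2020 = {"CountryA": 37.0, "CountryB": 25.0, "CountryC": 21.8, "CountryD": 1.8}

def top_emitters(data: dict[str, float], n: int = 3) -> list[str]:
    """Return the n countries with the highest per-capita emissions."""
    return sorted(data, key=data.get, reverse=True)[:n]
```

Answering "Which countries have the highest CO2 emissions per capita in 2020?" is then a one-line call, e.g. `top_emitters(co2_2020, 3)`.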



(6) Assessing Mathematical Skills

There are notable differences in how Claude-2–100k on AWS Bedrock and GPT-4-32k from Azure OpenAI handle math queries.

  • Claude-2–100k incorrectly solved a basic differential equation, though it managed to answer straightforward algebra and number series questions correctly.

  • In contrast, GPT-4–32k accurately answered all types of math questions, including those on differential equations, algebra, and number series.

(Screenshots: GPT-4–32k responses to the differential equation, algebra, and number series questions.)

Therefore, in these tests, GPT-4–32k outperformed Claude-2–100k in solving mathematical problems and explaining math-related queries.
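The article does not give the exact equations used. As a stand-in for the difficulty level of a "basic differential equation," dy/dx = 2y has the solution y = C·e^(2x), which can be spot-checked numerically:

```python
import math

# Stand-in example (not the equation from the test): dy/dx = 2y
# has the general solution y = C * e^(2x).
def y(x: float, C: float = 1.0) -> float:
    return C * math.exp(2 * x)

def dy_dx(x: float, h: float = 1e-6) -> float:
    # Central-difference approximation of the derivative of y.
    return (y(x + h) - y(x - h)) / (2 * h)

# The solution satisfies dy/dx = 2y to numerical precision:
assert abs(dy_dx(0.5) - 2 * y(0.5)) < 1e-4
```

A check like this is also a practical way to verify a model's symbolic answer before trusting it.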


Conclusion

  • Code Generation & Conversion: Both models exhibit comparable performance. They should be seen as aids for developers, not as complete replacements in software development.

  • Summarizing Books: Claude-2's extensive context window, capable of handling a large amount of text, makes it more suitable for summarizing or analyzing lengthy texts like books.

  • Analyzing Small Documents: Both models show similar effectiveness.

  • Analyzing Data: GPT-4–32k stands out for its faster response time, an important factor in scenarios requiring immediate data analysis.

  • Mathematical Abilities: GPT-4 demonstrates superior performance in solving and explaining mathematical problems, surpassing Claude-2.

  • Overall Assessment: While Claude-2–100k offers a larger context window with 100,000 tokens, GPT-4–32k slightly outperforms Claude-2 in various aspects.

It is crucial to recognize that Claude-2–100k's broader context is beneficial for tasks that demand thorough analysis of substantial datasets, an area where GPT-4’s limit of 32,000 tokens falls short.

The ability to process more input allows for a deeper understanding of data, which is invaluable in practical applications. The choice between models should be based on the specific needs of the project, weighing the importance of context depth against performance efficiency.



