Case Study

Automating the Generation of JavaScript Expressions with Pre-Trained AI Models


Developing a user-friendly tool that automatically generates JavaScript expressions based on user input. This tool aims to streamline the integration of new functionality with existing form fields. Since most AI offerings already include pre-trained models for code generation, we can leverage these models directly, eliminating the need to build a new one from scratch.


We explored leveraging pre-trained models offered by AWS Bedrock to achieve a standardized solution for automatic JavaScript expression generation. This approach allowed us to avoid the time and resource investment associated with creating and training a custom model from scratch.

AWS Bedrock offers a suite of foundation models that deliver robust AI capabilities for code generation. Below is a comparison of several models to identify the one that best fits our requirements, along with an overview of the pricing associated with each.

AWS Bedrock Models Compared

(Maximum token limits and per-1,000-token input and output pricing for each model are published by AWS; see the link below the comparison.)

Claude 3 Opus
Supported use cases: Task automation, interactive coding, research review, brainstorming and hypothesis generation, advanced analysis of charts and graphs, financials and market trends, forecasting.
Strengths: Better at understanding and maintaining context over longer conversations or more complex code snippets.
Limitations: Higher computational requirements, which may lead to slower response times and increased costs.

Claude 3 Sonnet
Supported use cases: RAG or search and retrieval over vast amounts of knowledge, product recommendations, forecasting, targeted marketing, code generation, quality control, parsing text from images.
Strengths: Improved understanding of code context, useful for generating more accurate and relevant code snippets.
Limitations: May lack some of the latest features of the most recent models, such as Claude 3 Opus.
Observed accuracy: Approximately 70% to 75% in our tests, after some prompt modifications.

Claude 2.1
Supported use cases: Summarization, Q&A, trend forecasting, comparing and contrasting multiple documents, and analysis. Claude 2.1 excels at the core capabilities of Claude 2.0 and Claude Instant.
Strengths: Likely to have fewer bugs and more stable performance due to iterative improvements; generally more affordable than newer models.
Limitations: Potentially less effective at maintaining context over long interactions compared to the Claude 3 series.
Observed accuracy: Approximately 70% to 75% in our tests, after some prompt modifications.

Claude 2.0
Supported use cases: Thoughtful dialogue, content creation, complex reasoning, creativity, and coding.
Strengths: Proven stability and reliability in code generation; lower computational costs and faster response times compared to the latest models.
Limitations: Less adept at handling complex or long contextual interactions.
Observed accuracy: Approximately 80% in our tests.

Claude 3 Haiku
Supported use cases: Quick and accurate support in live interactions, translations, content moderation, logistics optimization, inventory management, extracting knowledge from unstructured data.
Strengths: Optimized for specific types of tasks, possibly including code generation.
Limitations: Might not handle as wide a variety of code generation scenarios as more general models.
Observed accuracy: Approximately 70% to 75% in our tests, after some prompt modifications.

Amazon Titan Text Premier
Supported use cases: RAG, agents, chat, chain of thought, open-ended text generation, brainstorming, summarization, code generation, table creation, data formatting, paraphrasing, rewriting, extraction, and Q&A.
Strengths: Expected to provide high accuracy in code generation due to its advanced architecture; good at understanding and generating complex code structures.
Limitations: Requires significant computational resources, potentially leading to higher costs and slower performance.

Amazon Titan Text Express
Supported use cases: RAG, open-ended text generation, brainstorming, summarization, code generation, table creation, data formatting, paraphrasing, chain of thought, rewriting, extraction, Q&A, and chat.
Strengths: Optimized for faster response times, making it suitable for quick code generation tasks; more resource-efficient, lowering operational costs.
Limitations: May struggle with more complex code generation tasks compared to the Premier version.

Meta Llama 3 70B
Supported use cases: Text summarization, text classification, sentiment analysis and nuanced reasoning, language modelling, dialogue systems, code generation, and instruction following.
Strengths: Large model size suggests potential for high accuracy and detailed code generation; likely to be highly versatile across a wide range of code generation tasks.
Limitations: Very high computational requirements, leading to increased costs and slower processing times.
The data above is sourced from AWS. Click here to learn more.

We concluded that Claude 3 Sonnet is the best fit for our project based on the following factors:

Accuracy: We conducted tests to determine which model generates the most accurate expressions that meet our project requirements.

Cost: Claude 3 Sonnet incurred the lowest rates.

Input Token Support: It supports the maximum number of input tokens, which is crucial for our relatively large input prompt that may increase further for certain use cases.

Flexibility: The model can be swapped out later if needed, so our initial goal is to get the functionality working with the best model identified to date.

Step by Step Guide for Integration

Before integrating the model into the codebase, it’s crucial to set up AWS Bedrock and configure access.

Follow the step-by-step guide below:

AWS Account Setup: Ensure you have an AWS account. If not, create one at the AWS Management Console.

IAM Role Creation: Create an IAM role with the necessary permissions to access Bedrock services. Attach policies that allow invoking Bedrock models and managing secrets for secure API key storage.
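As an illustrative sketch (not a production policy), an IAM policy granting invoke access to a single Bedrock model might look like the following; the region and model ARN are assumptions to adapt, and you would add bedrock:InvokeModelWithResponseStream if you stream responses:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["bedrock:InvokeModel"],
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0"
    }
  ]
}
```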

API Key Generation: Generate an API key for Bedrock access. This key will be used in our C# application to authenticate and communicate with the model.

Enabling the AWS Bedrock Model: Navigate to the AWS Bedrock page and click “Get started” to access the models. Select the model you wish to use.

Visual Studio: Now, let’s go to the Visual Studio Editor and continue with some more configurations.

NuGet Package Manager: Install the AWSSDK.BedrockRuntime NuGet package.

App Settings: Store your AWS credentials securely using environment variables, the AWS credentials file, AWS Secrets Manager, or appsettings.json.

Now, we’re prepared to begin coding! Below is a straightforward example demonstrating how to instantiate the AWS Bedrock Runtime and utilize it to invoke the model:

// Requires: using System.Text; using System.Text.Json;
// using Amazon; using Amazon.BedrockRuntime; using Amazon.BedrockRuntime.Model;

var content = req.Expression;

using (var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1))
{
    var modelId = "anthropic.claude-3-sonnet-20240229-v1:0";

    // Build the request body in Anthropic's Messages format.
    var nativeRequest = JsonSerializer.Serialize(new
    {
        anthropic_version = "bedrock-2023-05-31",
        max_tokens = 512,
        temperature = 0.5,
        messages = new[]
        {
            new { role = "user", content = content }
        }
    });

    using (var memoryStream = new MemoryStream(Encoding.UTF8.GetBytes(nativeRequest)))
    {
        var request = new InvokeModelRequest
        {
            ModelId = modelId,
            Body = memoryStream,
            ContentType = "application/json"
        };

        var response = await client.InvokeModelAsync(request);

        // The generated text lives at content[0].text in the response body.
        var modelResponse = await JsonDocument.ParseAsync(response.Body);
        var responseText = modelResponse.RootElement
            .GetProperty("content")[0]
            .GetProperty("text")
            .GetString();

        return responseText;
    }
}

This code snippet illustrates how to initialize the AWS Bedrock Runtime and use it to invoke the Claude 3 Sonnet model, adapting the output according to specific requirements.

Note: The code may vary based on the specific model. Refer to the AWS Bedrock documentation for sample code for different models.
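For reference, Claude models on Bedrock return a response body in Anthropic’s Messages format. The JavaScript sketch below (field values are illustrative) shows the shape that the parsing logic in the C# snippet navigates to reach the generated code:

```javascript
// Hypothetical Bedrock response body for a Claude model, in
// Anthropic's Messages format; values here are illustrative.
const responseBody = {
  model: "anthropic.claude-3-sonnet-20240229-v1:0",
  content: [
    { type: "text", text: "const bmi = weight / (height * height);\nreturn bmi;" }
  ],
  stop_reason: "end_turn"
};

// Equivalent of GetProperty("content")[0].GetProperty("text") in the C# code:
const generatedExpression = responseBody.content[0].text;
console.log(generatedExpression);
```

The generated expression always arrives as plain text inside the first element of the content array, which is why the snippet indexes content[0] before reading text.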

It's time to Test!

Let’s illustrate this with an example.

Imagine an application managing patient records, which allows for custom user-defined fields. Unlike standard fields, such as patient name or ID, users can define additional fields based on specific calculations they require. This flexibility ensures our application can cater to diverse medical scenarios without needing code modifications.

For instance, consider a doctor who wants to calculate Body Mass Index (BMI) and display it on a patient’s report. The application has no built-in provision to capture this information, but the doctor can introduce a new field without contacting support and define a formula that derives its value.

This is where the magic happens. The doctor doesn’t need to write the formula himself; he can simply ask AI to write it for him!

User Input:

“BMI of the patient”

Our code prepends and appends some boilerplate text to turn this into a complete prompt, which looks like the following:

Final Input sent to the model:

Calculate the BMI of a patient considering the following data of a patient,
{ "ID": "1", "Name": "Vaishakhi", "HeightInCms": "165", "WeightInPounds": "132" }
Output should be a return statement. Do not include comments in the output.

The JSON model is the representation of a patient’s information that is already present in our database.
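The prompt-assembly step described above can be sketched as follows; this is a minimal JavaScript illustration of the prepend/append idea, not the exact production template:

```javascript
// Hypothetical prompt builder: wraps the user's request and the
// patient record (already in our database) in boilerplate instructions.
const userInput = "BMI of the patient";
const patientJson = { ID: "1", Name: "Vaishakhi", HeightInCms: "165", WeightInPounds: "132" };

function buildPrompt(request, record) {
  return [
    `Calculate the ${request} considering the following data of a patient,`,
    JSON.stringify(record),
    "Output should be a return statement. Do not include comments in the output."
  ].join("\n");
}

console.log(buildPrompt(userInput, patientJson));
```

Keeping the boilerplate in one place like this means the user only ever types the short request, while the model always receives the full context and output constraints.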

Output from the model: The AI model analyzes the input and generates the corresponding JavaScript expression for the intended functionality. The output looks similar to this:

const bmi = weight / (height * height);
return bmi;  
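To show how such a generated expression can be executed against the patient record, here is a minimal JavaScript sketch using the Function constructor. The expression body below is an assumption: it converts the stored units (centimetres, pounds) to metric before applying BMI = kg / m², which the real model output would also need to do for correct results:

```javascript
// Patient record as stored in our database (string-valued fields).
const patient = { ID: "1", Name: "Vaishakhi", HeightInCms: "165", WeightInPounds: "132" };

// Hypothetical model output: a return statement over the patient fields,
// converting pounds to kilograms and centimetres to metres first.
const expression = `
  const weightKg = Number(patient.WeightInPounds) * 0.453592;
  const heightM = Number(patient.HeightInCms) / 100;
  const bmi = weightKg / (heightM * heightM);
  return bmi;
`;

// Compile the expression into a callable and evaluate it for this patient.
const evaluate = new Function("patient", expression);
console.log(evaluate(patient).toFixed(1)); // prints 22.0
```

Because the model is instructed to emit a bare return statement, the application can treat every custom field the same way: compile the stored expression once and evaluate it per record.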

Here’s a visual representation of our user interface for generating JavaScript expressions:



  • Accelerated Development: Integration with pre-trained models accelerates development by eliminating the need for extensive custom model training, saving time and resources.
  • Consistency and Reliability: Leveraging pre-trained models ensures a consistent and reliable approach across different project needs, enhancing stability and ease of implementation.
  • Simplified Implementation: With a user-friendly interface requiring minimal technical expertise, developers can quickly integrate advanced AI functionalities into their projects using straightforward inputs like JSON models and prompts.


Integrating AWS Bedrock AI into your applications opens up a wealth of opportunities, empowering developers to harness advanced AI capabilities effortlessly. With AWS Bedrock, complex tasks like natural language processing can seamlessly enhance functionality and user experience across various projects.

This case study illustrates the successful implementation of a pre-trained AI model from AWS Bedrock to automate JavaScript expression generation based on user input. This streamlined approach not only accelerates development and reduces resource requirements but also offers a user-friendly way to integrate functionality with existing form fields.

Throughout this blog, we have outlined the setup of AWS Bedrock in a .NET environment, emphasizing essential steps such as installing the AWS SDK, configuring credentials, and creating a Bedrock client. By demonstrating how to execute AI models and retrieve results, we’ve showcased the simplicity and power of AWS Bedrock’s API.

For further insights on leveraging AWS Bedrock for your projects, click here.

Explore the provided code snippet and embark on your own AI integration journey with AWS Bedrock. Whether for JavaScript expressions or other tailored needs, this guide equips you to implement and customize AI solutions effectively.

Stay inspired, innovate boldly, and happy coding!

About Author

Vaishakhi Panchmatia

As Tech Co-Founder at Yugensys, I’m passionate about fostering innovation and propelling technological progress. By harnessing the power of cutting-edge solutions, I lead our team in delivering transformative IT services and Outsourced Product Development. My expertise lies in leveraging technology to empower businesses and ensure their success within the dynamic digital landscape.

If you’re looking to augment your software engineering team with one dedicated to impactful solutions and continuous advancement, feel free to connect with me. Yugensys can be your trusted partner in navigating the ever-evolving technological landscape.


© 2016-2024 Yugensys. All rights reserved.