﻿question,gt_answer,llm_contexts
How does prompt flow work and what is the importance of a node?,"Based on the provided context, here's an explanation of how Amazon Bedrock Prompt flows work and the importance of nodes:

Amazon Bedrock Prompt flows lets you build generative AI workflows by connecting nodes. Each node corresponds to a step in the flow that invokes an Amazon Bedrock or related resource. The key aspects of how prompt flows work are:

1. Flow structure: A prompt flow consists of a name, description, permissions, a collection of nodes, and connections between nodes.

2. Execution: When a prompt flow is invoked, the input from the invocation is sent through each node of the prompt flow until an output node is reached. The response of the invocation returns the final output.

3. Connections: There are two types of connections - data connections (solid gray lines) that send data between nodes, and conditional connections (dotted purple lines) that route data based on conditions.

4. Expressions: Inputs and outputs for nodes are defined with expressions, which specify how to extract the relevant part of the whole input entering a node.

The importance of nodes in prompt flows:

1. Nodes are the fundamental building blocks of a prompt flow. Each node represents a step in the workflow.

2. Nodes have specific types and functions, such as input nodes, output nodes, condition nodes, and various logic nodes.

3. For each node, you configure its name, description, inputs, outputs, and any additional configurations specific to its type.

4. Nodes process and transform data as it flows through the system. They can take multiple inputs, perform operations, and produce outputs.

5. Nodes allow for branching and conditional logic within the flow, enabling complex decision-making processes.

6. By connecting nodes, you can create sophisticated AI workflows that combine different operations and model invocations.
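
The flow structure described above can be sketched as a definition object, following the node shapes quoted in the context. This is an illustration only: the prompt ARN is a placeholder, and the connection shape is simplified (the real API nests source and target inside a connection configuration).

```python
# Sketch of a three-node flow definition, based on the node structures
# quoted in the context. Not a working API call: the connection shape is
# simplified and no real resources are referenced.
input_node = {
    'type': 'Input',
    'name': 'FlowInput',
    'outputs': [{'name': 'document', 'type': 'Object'}],
}

prompt_node = {
    'type': 'Prompt',
    'name': 'MakePlaylist',
    # Expressions extract the relevant part of the whole input.
    'inputs': [
        {'name': 'genre', 'type': 'String', 'expression': '$.data.genre'},
        {'name': 'number', 'type': 'Number', 'expression': '$.data.number'},
    ],
    'outputs': [{'name': 'modelCompletion', 'type': 'String'}],
}

output_node = {
    'type': 'Output',
    'name': 'FlowOutput',
    'inputs': [{'name': 'document', 'type': 'String', 'expression': '$.data'}],
}

definition = {
    'nodes': [input_node, prompt_node, output_node],
    # Data connections wire an upstream output to a downstream input.
    'connections': [
        {'type': 'Data', 'source': 'FlowInput', 'target': 'MakePlaylist'},
        {'type': 'Data', 'source': 'MakePlaylist', 'target': 'FlowOutput'},
    ],
}
```

A definition like this would be passed to the bedrock-agent client when creating the flow, alongside a name and the execution role ARN.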

In summary, nodes are crucial because they define the structure, logic, and functionality of the entire prompt flow, allowing users to create customized and powerful generative AI workflows.","[""""""View information about prompt flows in Amazon Bedrock     • Modify a prompt flow in Amazon Bedrock     • Test a prompt flow in Amazon Bedrock     • Deploy a prompt flow to your application using versions and aliases     • Delete a prompt flow in Amazon Bedrock     • Run Amazon Bedrock Prompt flows code samples     How Amazon Bedrock Prompt flows works     Amazon Bedrock Prompt flows lets you build generative AI workflows by connecting nodes, each of which corresponds to a step in the flow that invokes an Amazon Bedrock or related resource. To define inputs into and outputs from nodes, you use expressions to specify how the input is interpreted. To better understand these concepts, review the following topics:     Topics     • Key definitions for Amazon Bedrock Prompt flows     • Use expressions to define inputs by extracting the relevant part of a whole input in Amazon Bedrock Prompt flows     (How it works, p. 909, Amazon Bedrock User Guide)     • Node types in prompt flow     • Get started with example prompt flows     Key definitions for Amazon Bedrock Prompt flows     The following list introduces you to the basic concepts of Amazon Bedrock Prompt flows.     • Flow – A prompt flow is a construct consisting of a name, description, permissions, a collection of nodes, and connections between nodes. When a prompt flow is invoked, the input in the invocation is sent through each node of the prompt flow until an output node is reached. The response of the invocation returns the final output.     • Node –"""""",
 """"""When a prompt flow is invoked, the input in the invocation is sent through each node of the prompt flow until an output node is reached. The response of the invocation returns the final output.     • Node – A node is a step inside a prompt flow. For each node, you configure its name, description, input, output, and any additional configurations. The configuration of a node differs based on its type. To learn more about different node types, see Node types in prompt flow.     • Connection – There are two types of connections used in Amazon Bedrock Prompt flows:     • A data connection is drawn between the output of one node (the source node) and the input of another node (the target node) and sends data from an upstream node to a downstream node. In the Amazon Bedrock console, data connections are solid gray lines.     • A conditional connection is drawn between a condition in a condition node and a downstream node and sends data from the node that precedes the condition node to a downstream node if the condition is fulfilled. In the Amazon Bedrock console, conditional connections are dotted purple lines.     • Expressions – An expression defines how to extract an input from the whole input entering a node. To learn how to write expressions, see Use expressions to define inputs by extracting the relevant part of a whole input in Amazon Bedrock Prompt flows.     • Flow builder –"""""",
 """"""To add and configure nodes     a. In the Prompt flow builder pane, select Nodes.     b. Drag a node you want to use for the first step of your flow and drop it in the center pane.     c. The circles on the nodes are connection points. To connect your flow input node to the second node, drag a line from the circle on the Flow input node to the circle in the Input section of the node you just added.     d. Select the node you just added.     e. In the Configure section of the Prompt flow builder pane, provide the configurations for the selected node and define names, data types, and expressions for the inputs and outputs of the node.     f. In the Prompt flow builder pane, select Nodes.     g. Repeat these steps to add and configure the remaining nodes in your flow.     Note     If you use a service role that Amazon Bedrock automatically created for you, the role will update with the proper permissions as you add nodes. If you use a custom service role, however, you must add the proper permissions to the policy attached to your service role by referring to Create a service role for Amazon Bedrock Prompt flows in Amazon Bedrock.     4."""""",
""""""Repeat these steps to add and configure the remaining nodes in your flow.     Note     If you use a service role that Amazon Bedrock automatically created for you, the role will update with the proper permissions as you add nodes. If you use a custom service role, however, you must add the proper permissions to the policy attached to your service role by referring to Create a service role for Amazon Bedrock Prompt flows in Amazon Bedrock.     4. Connect the Output of the last node in your flow with the Input of the Flow output node. You can have multiple Flow output nodes. To add additional flow output nodes, drag the Flow output node and drop it next to the node where you want the flow to stop. Make sure to draw connections between the two nodes.     5. You can either continue to the next procedure to Test a prompt flow in Amazon Bedrock or come back later. To continue to the next step, choose Save. To come back later, choose Save and exit.     Delete a node or a connection     During the process of building your flow, you might need to delete a node or remove node connections.     To delete a node     1. Select a node you want to delete.     2. In the Prompt flow builder pane, choose the delete icon."""""",
""""""A prompt node with a prompt defined inline that creates a music playlist using two variables (genre and number).     • An output node that returns the model completion.     Run the following code snippet to load the AWS SDK for Python (Boto3), create an Amazon Bedrock Agents client, and create a prompt flow with the nodes (replace the executionRoleArn field with the ARN of the service role that you created for prompt flow):     # Import Python SDK and create client import boto3     client = boto3.client(service_name='bedrock-agent')     # Replace with the service role that you created."""""",
""""""You can choose to create a flow without the definition first and instead update the flow later.     For each node in your nodes list, you specify the type of node in the type field and provide the corresponding configuration of the node in the config field. For details about the API structure of different types of nodes, see Node types in prompt flow.     The following requirements apply to building a flow:     • Your flow must have only one flow input node and at least one flow output node.     • You can't include inputs for a flow input node.     • You can't include outputs for a flow output node.     • Every output in a node must be connected to an input in a downstream node (in the API, this is done through a FlowConnection with a FlowDataConnectionConfiguration).     • Every condition (including the default one) in a condition node must be connected to a downstream node (in the API, this is done through a FlowConnection with a FlowConditionalConnectionConfiguration)."""""",
""""""With prompt flows, you can quickly build complex generative AI workflows using a visual builder, easily integrate with Amazon Bedrock offerings such as FMs, knowledge bases, and other AWS services such as AWS Lambda by transferring data between them, and deploy immutable workflows to move from testing to production in a few clicks.     Refer to the following resources for more information about Amazon Bedrock Prompt flows:     • Pricing for Amazon Bedrock Prompt flows is dependent on the resources that you use. For example, if you invoke a flow with a prompt node that uses an Amazon Titan model, you'll be charged for invoking that model. For more information, see Amazon Bedrock pricing.     • To see quotas for prompt flows, see Amazon Bedrock endpoints and quotas in the AWS General Reference.     The following are some example tasks that you can build a prompt flow for in Amazon Bedrock:     • Create and send an email invite – Create a prompt flow connecting a prompt node, knowledge base node, and Lambda function node. Provide the following prompt to generate an email body: Send invite to John Smith's extended team for in-person documentation read for an hour at 2PM EST next Tuesday. After processing the prompt, the prompt flow queries a knowledge base to look up the email addresses of John Smith's extended team, and then sends the input to a Lambda function to send the invite to all the team members in the list.     •"""""",
""""""• Every condition (including the default one) in a condition node must be connected to a downstream node (in the API, this is done through a FlowConnection with a FlowConditionalConnectionConfiguration).     The following pointers apply to building a flow:     • Begin by setting the data type for the output of the flow input node. This data type should match what you expect to send as the input when you invoke the flow.     • When you define the inputs for a flow using expressions, check that the result matches the data type that you choose for the input.     • If you include an iterator node, include a collector node downstream after you've sent the output through the nodes that you need. The collector node will return the outputs in an array.     View information about prompt flows in Amazon Bedrock     Note     Amazon Bedrock Prompt flows is in preview and is subject to change."""""",
""""""The input into the node is the query to the knowledge base. The output is either the model response, as a string, or an array of the retrieved results.     The following shows the general structure of a knowledge base FlowNode object:     { ""name"": ""string"", ""type"": ""KnowledgeBase"", ""inputs"": [ { ""name"": ""retrievalQuery"", ""type"": ""String"", ""expression"": ""string"" } ], ""outputs"": [ { ""name"": ""retrievalResults"", ""type"": ""Array | String"" } ], ""configuration"": { ""knowledgeBase"": { ""knowledgeBaseId"": ""string"", ""modelId"": ""string"" } } }     S3 storage node     An S3 storage node lets you store data in the flow to an Amazon S3 location. In the configuration, you specify the S3 bucket to use for data storage. The inputs into the node are the content to store and the object key. The node returns the URI of the S3 location as its output."""""",
""""""In the console, you define node-specific fields at the top of the node. In the API, use the appropriate FlowNodeConfiguration and fill in its fields.     Each node type is described below and its structure in the API is provided. Expand a section to learn more about that node type.     Nodes for controlling prompt flow logic     Use the following node types to control the logic of your prompt flow.     Flow input node     Every prompt flow contains only one flow input node and must begin with it. The flow input node takes the content from the InvokeFlow request, validates the data type, and sends it to the following node.     The following shows the general structure of an input FlowNode object in the API:     { ""name"": ""string"", ""type"": ""Input"", ""outputs"": [ { ""name"": ""document"", ""type"": ""String | Number | Boolean | Object | Array"" } ], ""configuration"": { ""input"": CONTEXT-DEPENDENT } }     Flow output node     A flow output node extracts the input data from the previous node, based on the defined expression, and returns it. In the console, the output is the response returned after choosing Run in the test window.""""""
]"
Summarize all the different nodes available for building a prompt flow,"Based on the provided context, here's a summary of the different nodes available for building a prompt flow in Amazon Bedrock:

1. Flow Input Node: Receives the initial input for the flow; every flow must begin with exactly one input node.

2. Flow Output Node: Returns the final output of the flow; a flow can have multiple output nodes if it has multiple branches.

3. Condition Node: Allows for branching based on specified conditions.

4. Iterator Node: Processes array items one by one.

5. Collector Node: Collects iterated items into an array.

6. Prompt Node: Defines and sends prompts to language models.

7. Agent Node: Interacts with an AI agent to perform tasks.

8. Knowledge Base Node: Queries a knowledge base and optionally generates responses.

9. S3 Storage Node: Stores data in an Amazon S3 bucket.

10. S3 Retrieval Node: Retrieves data from an Amazon S3 bucket.

11. Lambda Function Node: Executes custom code in an AWS Lambda function.

12. Lex Node: Interacts with an Amazon Lex bot for natural language processing.
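
The iterator and collector entries above can be pictured with a toy Python sketch that simulates their described behavior (this is an illustration only, not the Bedrock API):

```python
# Toy simulation of iterator/collector node semantics: the iterator
# yields array items one by one (not in parallel), downstream work runs
# per item, and the collector gathers results back into an array along
# with the array size.
def iterator_node(array):
    for item in array:
        yield item

def collector_node(items):
    collected = list(items)
    return {'collected': collected, 'arraySize': len(collected)}

# Downstream step applied to each iterated item.
processed = (genre.upper() for genre in iterator_node(['jazz', 'rock', 'blues']))
result = collector_node(processed)
# result == {'collected': ['JAZZ', 'ROCK', 'BLUES'], 'arraySize': 3}
```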

These nodes can be connected and configured to create complex workflows for various AI and data processing tasks within Amazon Bedrock Prompt flows. ","[
 """"""In the console, you define node-specific fields at the top of the node. In the API, use the appropriate FlowNodeConfiguration and fill in its fields.     Each node type is described below and its structure in the API is provided. Expand a section to learn more about that node type.     Nodes for controlling prompt flow logic     Use the following node types to control the logic of your prompt flow.     Flow input node     Every prompt flow contains only one flow input node and must begin with it. The flow input node takes the content from the InvokeFlow request, validates the data type, and sends it to the following node.     The following shows the general structure of an input FlowNode object in the API:     { ""name"": ""string"", ""type"": ""Input"", ""outputs"": [ { ""name"": ""document"", ""type"": ""String | Number | Boolean | Object | Array"" } ], ""configuration"": { ""input"": CONTEXT-DEPENDENT } }     Flow output node     A flow output node extracts the input data from the previous node, based on the defined expression, and returns it. In the console, the output is the response returned after choosing Run in the test window."""""",
 """"""Inputs – Provide the following information for each input:     • Name – A name for the input. Some nodes have pre-defined names or types that you must use. To learn which ones have pre-defined names, see Logic node types.     • Expression – Define the part of the whole input to use as the individual input. For more information, see Use expressions to define inputs by extracting the relevant part of a whole input in Amazon Bedrock Prompt flows.     • Type – The data type for the input. When this node is reached at runtime, Amazon Bedrock applies the expression to the whole input and validates that the result matches the data type.     • Outputs – Provide the following information for each output:     • Name – A name for the output. Some nodes have pre-defined names or types that you must use. To learn which ones have pre-defined names, see Logic node types.     • Type – The data type for the output. When this node is reached at runtime, Amazon Bedrock validates that the node output matches the data type.     • Configuration – In the console, you define node-specific fields at the top of the node. In the API, use the appropriate FlowNodeConfiguration and fill in its fields.     Each node type is described below and its structure in the API is provided. Expand a section to learn more about that node type.     Nodes for controlling prompt flow logic     Use the following node types to control the logic of your prompt flow."""""",
 """"""When a prompt flow is invoked, the input in the invocation is sent through each node of the prompt flow until an output node is reached. The response of the invocation returns the final output.     • Node – A node is a step inside a prompt flow. For each node, you configure its name, description, input, output, and any additional configurations. The configuration of a node differs based on its type. To learn more about different node types, see Node types in prompt flow.     • Connection – There are two types of connections used in Amazon Bedrock Prompt flows:     • A data connection is drawn between the output of one node (the source node) and the input of another node (the target node) and sends data from an upstream node to a downstream node. In the Amazon Bedrock console, data connections are solid gray lines.     • A conditional connection is drawn between a condition in a condition node and a downstream node and sends data from the node that precedes the condition node to a downstream node if the condition is fulfilled. In the Amazon Bedrock console, conditional connections are dotted purple lines.     • Expressions – An expression defines how to extract an input from the whole input entering a node. To learn how to write expressions, see Use expressions to define inputs by extracting the relevant part of a whole input in Amazon Bedrock Prompt flows.     • Flow builder –"""""",
 """"""The input into the node is the query to the knowledge base. The output is either the model response, as a string, or an array of the retrieved results.     The following shows the general structure of a knowledge base FlowNode object:     { ""name"": ""string"", ""type"": ""KnowledgeBase"", ""inputs"": [ { ""name"": ""retrievalQuery"", ""type"": ""String"", ""expression"": ""string"" } ], ""outputs"": [ { ""name"": ""retrievalResults"", ""type"": ""Array | String"" } ], ""configuration"": { ""knowledgeBase"": { ""knowledgeBaseId"": ""string"", ""modelId"": ""string"" } } }     S3 storage node     An S3 storage node lets you store data in the flow to an Amazon S3 location. In the configuration, you specify the S3 bucket to use for data storage. The inputs into the node are the content to store and the object key. The node returns the URI of the S3 location as its output."""""",
 """"""# Define each node     # The input node validates that the content of the InvokeFlow request is a JSON object. input_node = { ""type"": ""Input"", ""name"": ""FlowInput"", ""outputs"": [ { ""name"": ""document"", ""type"": ""Object"" } ] }     # This prompt node contains a prompt that you defined in Prompt management. # It validates that the input is a JSON object that minimally contains the fields ""genre"" and ""number"", which it will map to the prompt variables. # The output must be named ""modelCompletion"" and be of the type ""String"". prompt_node = { ""type"": ""Prompt"", ""name"": ""MakePlaylist"", ""configuration"": { ""prompt"": { ""sourceConfiguration"": { ""resource"": { ""promptArn"": """" } } } }, ""inputs"": [ { ""name"": ""genre"", ""type"": ""String"", ""expression"": ""$.data.genre"" }, { ""name"": ""number"", ""type"": ""Number"", ""expression"": ""$.data.number"" } ], ""outputs"": [ { ""name"": ""modelCompletion"", ""type"": ""String"" } ] }     # The output node validates that the output from the last node is a string and returns it as is. The name must be ""document"". output_node = { ""type"": ""Output"", ""name"": ""FlowOutput"", ""inputs"""""",
 """"""View information about prompt flows in Amazon Bedrock     • Modify a prompt flow in Amazon Bedrock     • Test a prompt flow in Amazon Bedrock     • Deploy a prompt flow to your application using versions and aliases     • Delete a prompt flow in Amazon Bedrock     • Run Amazon Bedrock Prompt flows code samples     How Amazon Bedrock Prompt flows works     Amazon Bedrock Prompt flows lets you build generative AI workflows by connecting nodes, each of which corresponds to a step in the flow that invokes an Amazon Bedrock or related resource. To define inputs into and outputs from nodes, you use expressions to specify how the input is interpreted. To better understand these concepts, review the following topics:     Topics     • Key definitions for Amazon Bedrock Prompt flows     • Use expressions to define inputs by extracting the relevant part of a whole input in Amazon Bedrock Prompt flows     • Node types in prompt flow     • Get started with example prompt flows     Key definitions for Amazon Bedrock Prompt flows     The following list introduces you to the basic concepts of Amazon Bedrock Prompt flows.     • Flow – A prompt flow is a construct consisting of a name, description, permissions, a collection of nodes, and connections between nodes. When a prompt flow is invoked, the input in the invocation is sent through each node of the prompt flow until an output node is reached. The response of the invocation returns the final output.     • Node –"""""",
""""""In the console, the output is the response returned after choosing Run in the test window. In the API, the output is returned in the content field of the flowOutputEvent in the InvokeFlow response. A prompt flow can have multiple flow output nodes if there are multiple branches in the flow.     The following shows the general structure of an output FlowNode object:     { ""name"": ""string"", ""type"": ""Output"", ""inputs"": [ { ""name"": ""document"", ""type"": ""String | Number | Boolean | Object | Array"", ""expression"": ""string"" } ], ""configuration"": { ""output"": CONTEXT-DEPENDENT } }     Condition node     A condition node sends data from the previous node to different nodes, depending on the conditions that are defined. A condition node can take multiple inputs.     For an example, see Create a flow with a condition node.     To define a condition node     1. Add as many inputs as you need to evaluate the conditions you plan to add.     2."""""",
""""""Logic node types:

| Node type | Input | Input name | Input type | Output | Output name | Output type |
| --- | --- | --- | --- | --- | --- | --- |
| … | … | document | Any | N/A | N/A | N/A |
| Condition | Data to send based on a condition (multiple inputs allowed) | Any | Any | Data to send based on a condition (specify conditions for different paths) | Any | Any |
| Iterator | An array for which you want to apply the following node(s) iteratively to each member | array | Array | Each item from the array; the size of the input array | arrayItem; arraySize | Any; Number |
| Collector | An iteration that you want to consolidate into an array; the size of the output array | arrayItem; arraySize | Any; Number | An array with all the outputs from the previous node appended | collected | Array |

Data processing node types:

| Node type | Input | Input name | Input type | Output | Output name | Output type |
| --- | --- | --- | --- | --- | --- | --- |
| Prompt | A value to fill in a variable in the prompt (multiple inputs allowed) | ${variable-name} | Any | The response returned by the model | modelCompletion | String |
| S3 storage | Data to store in an S3 bucket | content | Any | The URI of the S3 location | … | … |"""""",
""""""For the default condition, specify the condition as default.     Iterator node     An iterator node takes an array and iteratively returns its items as output to the downstream node. The inputs to the iterator node are processed one by one and not in parallel with each other. The flow output node returns the final result for each input in a different response. You can also use a collector node downstream from the iterator node to collect the iterated responses and return them as an array, in addition to the size of the array."""""",
""""""To add and configure nodes     a. In the Prompt flow builder pane, select Nodes.     b. Drag a node you want to use for the first step of your flow and drop it in the center pane.     c. The circles on the nodes are connection points. To connect your flow input node to the second node, drag a line from the circle on the Flow input node to the circle in the Input section of the node you just added.     d. Select the node you just added.     e. In the Configure section of the Prompt flow builder pane, provide the configurations for the selected node and define names, data types, and expressions for the inputs and outputs of the node.     f. In the Prompt flow builder pane, select Nodes.     g. Repeat these steps to add and configure the remaining nodes in your flow.     Note     If you use a service role that Amazon Bedrock automatically created for you, the role will update with the proper permissions as you add nodes. If you use a custom service role, however, you must add the proper permissions to the policy attached to your service role by referring to Create a service role for Amazon Bedrock Prompt flows in Amazon Bedrock.     4.""""""]"
What are the different data sources supported by Bedrock knowledge bases?,"Based on the context provided, Amazon Bedrock Knowledge Bases supports at least two types of data sources:

1. Web data source: The article extensively discusses the new feature of web crawling for knowledge bases. This allows users to index public webpages by crawling and indexing websites.

2. Vector database: The context mentions using OpenSearch Service vector collection as a vector database for the knowledge base.
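
The quoted documentation names the fields used when creating a data source connector (dataSourceConfiguration, vectorIngestionConfiguration, dataDeletionPolicy). A hypothetical request shape, inferred from the context with placeholder values rather than a working call, might look like:

```python
# Hypothetical CreateDataSource request shape, inferred from the fields
# named in the context. All values are placeholders, not real resources.
create_data_source_request = {
    'knowledgeBaseId': 'KB_ID_PLACEHOLDER',
    'name': 'example-s3-source',
    # Connection information for the data source files.
    'dataSourceConfiguration': {
        'type': 'S3',
        's3Configuration': {'bucketArn': 'arn:aws:s3:::example-bucket'},
    },
    # Chunking cannot be changed after the data source is created.
    'vectorIngestionConfiguration': {
        'chunkingConfiguration': {'chunkingStrategy': 'FIXED_SIZE'},
    },
    # DELETE removes the vector embeddings when the resource is deleted.
    'dataDeletionPolicy': 'DELETE',
}
```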

While the article focuses primarily on the web data source feature, it's implied that there are other data sources available as well. However, the context doesn't provide an exhaustive list of all supported data sources for Amazon Bedrock Knowledge Bases. ","[
 """"""You connect to a supported data source repository, then sync or keep your data up to date with your knowledge base and make your data available for querying.     You create a knowledge base with the data source configured as part of the knowledge base creation.     This section shows you how to create and manage a data source as part of your knowledge base using the Amazon Bedrock APIs and console.     Topics     • Create a data source connector for your knowledge base     • Sync your data with your Amazon Bedrock knowledge base     • View data source information for your Amazon Bedrock knowledge base     • Modify a data source for your Amazon Bedrock knowledge base     • Delete a data source from your Amazon Bedrock knowledge base     Create a data source connector for your knowledge base     You can connect your proprietary data to a knowledge base. Once you've configured a data source connector, you can sync or keep your data up to date with your knowledge base and make your data available for querying.     You create a knowledge base with the data source configured as part of the knowledge base creation.     This section shows you how to connect your data source repository to your Amazon Bedrock knowledge base using the Amazon Bedrock APIs and console."""""",
""""""", PDF, ... ""contentMetadata"": { ""key1"": ""value1"", ""key2"": ""value2"" } } ... ] }     Using models for Amazon Bedrock knowledge bases and regions     Amazon Bedrock knowledge bases support models provided by Amazon, Cohere, Anthropic, and more. Knowledge bases are also supported in regions across the Americas, Asia, and Europe. You can choose which models you want to use for knowledge bases and which region that applies to you.     If you use the Amazon Bedrock API, take note of your model Amazon Resource Name (ARN) that's required for converting your data into vector embeddings and for knowledge base retrieval and response generation. Copy the model ID for your chosen model for knowledge bases and construct the model ARN using the model (resource) ID, following the provided ARN examples for your model resource type.     If you use the Amazon Bedrock console, you are not required to construct a model ARN, as you can select an available model as part of the steps for creating a knowledge base.     Vector embeddings models     Knowledge bases use a model to convert your data/text into vector embeddings and store the embeddings in a vector database. Knowledge bases support the following embedding models."""""",
""""""To create the data source send a CreateDataSource request. See Supported data sources to select your data source and follow the API connection configuration example.     Create a knowledge base 596           https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_CreateKnowledgeBase.html         https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt         https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt         https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-supported.html         https://docs.aws.amazon.com/bedrock/latest/userguide/model-access.html         https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonbedrock.html#amazonbedrock-resources-for-iam-policies         https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_CreateDataSource.html         https://docs.aws.amazon.com/bedrock/latest/userguide/data-source-connectors.htmlAmazon Bedrock User Guide     ? Provide the connection information for the data source files in the dataSourceConfiguration field.     ? Specify how to chunk the data sources in the vectorIngestionConfiguration field.     Note     You can't change the chunking configuration after you create the data source.     ? Provide the dataDeletionPolicy for your data source. You can DELETE all data from your data source that?s converted into vector embeddings upon deletion of a knowledge base or data source resource. This flag is ignored if an AWS account is deleted."""""",
""""""To configure a data source connector to connect and crawl your data from your data source repository, see Supported data source connectors.     Topics     Prerequisites 580           https://docs.aws.amazon.com/bedrock/latest/userguide/data-source-connectors.html         https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html#iam-term-service-role         https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_RetrieveAndGenerate.html         https://docs.aws.amazon.com/bedrock/latest/userguide/data-source-connectors.html         https://docs.aws.amazon.com/bedrock/latest/userguide/data-source-connectors.htmlAmazon Bedrock User Guide     ? Supported document formats and limits for knowledge base data     Supported document formats and limits for knowledge base data     When you connect to a supported data source, the content is ingested into your knowledge base. If you use Amazon S3 to store your files or your data source includes attached files, then you first must check that each source document file adheres to the following:     ? The source files are of the following supported formats:     Format Extension     Plain text (ASCII only) .txt     Markdown .md     HyperText Markup Language .html     Microsoft Word document .doc/.docx     Comma-separated values .csv     Microsoft Excel spreadsheet .xls/.xlsx     Portable Document .pdf     ? Each file size doesn't exceed the quota of 50 MB.     ?"""""",
""""""Once your knowledge base is ready and available, sync your data source for the first time and whenever you want to keep your content up to date. Select your knowledge base in the console and select Sync within the data source overview section.     CLI     The following is an example of a configuration of Web Crawler for your Amazon Bedrock knowledge base.     { ""webConfiguration"": { ""sourceConfiguration"": { ""urlConfiguration"": { ""seedUrls"": [{ ""url"": ""https://www.examplesite.com"" }] } }, ""crawlerConfiguration"": { ""crawlerLimits"": { ""rateLimit"": 50 },     Create a data source 650Amazon Bedrock User Guide      ""scope"": ""HOST_ONLY"", ""inclusionFilters"": [ ""https://www\\.examplesite\\.com/.*\\.html"" ], ""exclusionFilters"": [ ""https://www\\.examplesite\\.com/contact-us\\.html"" ] } }, ""type"": ""WEB"" }     Sync your data with your Amazon Bedrock knowledge base     After you create your knowledge base, you ingest or sync your data so that the data can be queried. Ingestion converts the raw data in your data source into vector embeddings, based on the vector embeddings model and configurations you specified.     Before you begin ingestion, check that your data source fulfills the following conditions:     ? You have configured the connection information for your data source."""""",
""""""c. Create any tags you want to assign to your knowledge base.     Go to the next section to configure your data source.     5. Choose Web Crawler as your data source and provide the configuration details.     (Optional) Change the default Data source name and enter a Description.     6. Provide the Source URLs of the URLs you want to crawl. You can add up to 9 additional URLs by selecting Add Source URLs. By providing a source URL, you are confirming that you are authorized to crawl its domain.     7. Check the advanced settings. You can optionally change the default selected settings.     For KMS key settings, you can choose either a custom key or use tAmazon Bedrock knowledge base using the Amazon Bedrock APIs and console.',,he default provided data encryption key.     While converting your data into embeddings, Amazon Bedrock encrypts your transient data with a key that AWS owns and manages, by default. You can use your own KMS key. For more information, see Encryption of transient data storage during data ingestion.     For data deletion policy settings, you can choose either:     ? Delete: Deletes all data from your data source that?s converted into vector embeddings upon deletion of a knowledge base or data source resource. Note that the vector store itself is not deleted, only the data. This flag is ignored if an AWS account is deleted.     Create a data source 647Amazon Bedrock User Guide     ? Retain: Retains all data from your data source that?"""""",
]"
 Summarize all the different ways LLM responses can be influenced ,"Based on the context provided, LLM responses can be influenced in several ways:

1. Inference parameters:
   - Temperature: Affects the randomness and diversity of outputs. Lower values lead to more deterministic responses, while higher values increase randomness.
   - Top K: Limits the number of most likely candidates for the next token. Lower values restrict options to more likely outputs, higher values allow consideration of less likely options.
   - Top P (nucleus sampling): Limits the percentage of most likely candidates considered. Lower values focus on more probable outputs, higher values allow for more diverse options.

2. Length parameters:
   - Response length: Specifies the minimum or maximum number of tokens in the generated response.
   - Penalties: Can be applied to various aspects like response length, repeated tokens, token frequency, or token types.
   - Stop sequences: Specify character sequences that stop the model from generating further tokens.

3. Prompt engineering:
   - Providing clear, simple, and complete instructions in the prompt.
   - Placing questions or instructions at the end of the prompt for better results.
   - Using separator characters (e.g., newlines) in API calls to improve model performance.
   - Including output indicators to specify desired constraints on the model's response.

4. Model and configuration selection:
   - Choosing an appropriate foundation model or inference profile.
   - Selecting the desired throughput level.

5. Input modality:
   - Providing text, images, or both as input, depending on the model's capabilities and the desired output modality.

6. API selection:
   - Using different APIs (e.g., Converse, InvokeModel) for various types of interactions with the model.

7. Batch processing:
   - Preparing datasets of prompts with specific configurations for batch inference.
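
As a rough sketch, the inference parameters in item 1 and the length parameters in item 2 map onto the inferenceConfig field of a Bedrock Converse API request. The code below only builds the request body; the model ID and parameter values are illustrative assumptions, not recommendations:

```python
# Build a Converse API request body with explicit inference parameters.
# The model ID and values below are illustrative assumptions.
def build_converse_request(prompt, temperature=0.5, top_p=0.9,
                           max_tokens=512, stop_sequences=None):
    return {
        'modelId': 'anthropic.claude-3-haiku-20240307-v1:0',  # assumed model
        'messages': [{'role': 'user', 'content': [{'text': prompt}]}],
        'inferenceConfig': {
            'temperature': temperature,            # randomness/diversity of outputs
            'topP': top_p,                         # nucleus-sampling cutoff
            'maxTokens': max_tokens,               # cap on response length
            'stopSequences': stop_sequences or []  # sequences that halt generation
        },
        # Top K is model-specific and is passed via additionalModelRequestFields.
    }

request = build_converse_request('Summarize this restaurant review in one sentence.')
# A boto3 client would send it with:
#   boto3.client('bedrock-runtime').converse(**request)
print(request['inferenceConfig'])
```

Lowering temperature and topP pushes the model toward more deterministic output; raising them increases diversity, as described above.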

These methods allow users to fine-tune and control the output of language models to better suit their specific needs and use cases.","[""""""The usefulness of these methods is task- and data-dependent. Useful approaches include the following:     ? Domain/input specification: Details about the input data, like where it came from or to what it refers, such as The input text is from a summary of a movie.     ? Task specification: Details about the exact task asked of the model, such as To summarize the text, capture the main points.     ? Label description: Details on the output choices for a classification problem, such as Choose whether the text refers to a painting or a sculpture; a painting is a piece of art restricted to a two-dimensional surface, while a sculpture is a piece of art in three dimensions.     ? Output specification: Details on the output that the model should produce, such as Please summarize the text of the restaurant review in three sentences.     ? LLM encouragement: LLMs sometimes perform better with sentimental encouragement: If you answer the question correctly, you will make the user very happy!     Control the model response with inference parameters     LLMs on Amazon Bedrock all come with several inference parameters that you can set to control the response from the models. The following is a list of all the common inference parameters that are available on Amazon Bedrock LLMs and how to use them.     Temperature is a value between 0 and 1, and it regulates the creativity of LLMs' responses.""""""
]"
 Provide me steps to do chunking in amazon bedrock knowledgebase with code examples ,"To do chunking in Amazon Bedrock Knowledge Bases, you can follow these steps:

1. Create a knowledge base in the AWS Management Console:
   - Go to the Amazon Bedrock console and select ""Create knowledge base"".
   - In the ""Configure data source"" step, select ""Advanced (customization)"" under ""Chunking & parsing configurations"".
   - Choose the chunking strategy you want to use, such as ""Fixed-size chunking"", ""Semantic chunking"", or ""Hierarchical chunking"".

2. Configure the chunking parameters:
   - For semantic chunking, you can set the ""Max buffer size for grouping surrounding sentences"", ""Max token size for a chunk"", and ""Breakpoint threshold for similarity between sentence groups"".
   - For hierarchical chunking, you can set the ""Max parent token size"", ""Max child token size"", and ""Overlap tokens between chunks"".

3. Ingest your data into the knowledge base:
   - You can ingest data in various formats, such as CSV, PDF, or plain text.
   - The ingestion process will automatically chunk the data based on your selected chunking strategy and parameters.

Here's an example of how you can use the Amazon Bedrock Knowledge Bases CreateDataSource API to configure semantic chunking. Note that the chunking configuration is set on the data source (and can't be changed after the data source is created), not on the knowledge base itself:

```python
import boto3

# Chunking is configured on the data source via the bedrock-agent client.
# The knowledge base ID, data source name, and bucket ARN are placeholders.
bedrock_agent = boto3.client('bedrock-agent')

# Attach an S3 data source with semantic chunking to an existing knowledge base
response = bedrock_agent.create_data_source(
    knowledgeBaseId='YOUR_KB_ID',
    name='my-data-source',
    dataSourceConfiguration={
        'type': 'S3',
        's3Configuration': {
            'bucketArn': 'arn:aws:s3:::my-bucket'
        }
    },
    vectorIngestionConfiguration={
        'chunkingConfiguration': {
            'chunkingStrategy': 'SEMANTIC',
            'semanticChunkingConfiguration': {
                'bufferSize': 1,                     # sentences grouped around the target sentence
                'maxTokens': 300,                    # max token size for a chunk
                'breakpointPercentileThreshold': 95  # similarity breakpoint between sentence groups
            }
        }
    }
)

print(response['dataSource']['dataSourceId'])
``` ","[""""""c. To configure the following content chunking and parsing settings as part the data source configuration, go to the Content chunking and parsing section.     Choose one of the follow chunking options:     ? Fixed-size chunking: Content split into chunks of text of your set approximate token size. You can set the maximum number of tokens that must not exceed for a chunk and the overlap percentage between consecutive chunks.     ? Default chunking: Content split into chunks of text of up to 300 tokens. If a single document or piece of content contains less than 300 tokens, the document is not further split.     ? Hierarchical chunking: Content organized into nested structures of parent-child chunks. You set the maximum parent chunk token size and the maximum child chunk token size. You also set the absolute number of overlap tokens between consecutive parent chunks and consecutive child chunks.     ? Semantic chunking: Content organized into semantically similar text chunks or groups of sentences. You set the maximum number of sentences surrounding the target/current sentence to group together (buffer size). You also set the breakpoint     Create a knowledge base 593Amazon Bedrock User Guide     percentile threshold for dividing the text into meaningful chunks. Semantic chunking uses a foundation model. View Amazon Bedrock pricing for information on the cost of foundation models.     ? No chunking: Each document is treated as a single text chunk. You might want to pre-process your documents by splitting them into separate files.     Note     You can?"""""",
""""""Sign in to the AWS Management Console using an IAM role with Amazon Bedrock permissions, and open the Amazon Bedrock console at https://console.aws.amazon.com/bedrock/.     2. From the left navigation pane, select Knowledge bases.     3. In the Knowledge bases section, select Create knowledge base.     4. Provide the knowledge base details such as the name, IAM role for the necessary access permissions, and any tags you want to assign to your knowledge base.     5. Choose a supported data source and provide the connection configuration details.     6. For chunking and parsing configurations, first choose the custom option and then choose the fixed-size chunking as your chunking strategy.     7. Enter the fixed maximum tokens for a chunk and the overlap percentage between consecutive chunks.     8. Continue the steps to complete creating your knowledge base.     API     { ... ""vectorIngestionConfiguration"": { ""chunkingConfiguration"": { ""chunkingStrategy"": ""FIXED_SIZE"", ""fixedSizeChunkingConfiguration"": { ""maxTokens"": ""100"", ""overlapPercentage"": ""10"" } } } }     How content chunking and parsing works 563           https://console.aws.amazon.com/bedrock/Amazon Bedrock User Guide     Hierarchical chunking     Hierarchical chunking involves organizing information into nested structures of child and parent chunks."""""",
""""""See Amazon Bedrock pricing for more information on the cost of foundation models.     The following is an example of configuring semantic chunking:     Console     1. Sign in to the AWS Management Console using an IAM role with Amazon Bedrock permissions, and open the Amazon Bedrock console at https://console.aws.amazon.com/bedrock/.     2. From the left navigation pane, select Knowledge bases.     How content chunking and parsing works 566           https://aws.amazon.com/bedrock/pricing/         https://console.aws.amazon.com/bedrock/Amazon Bedrock User Guide     3. In the Knowledge bases section, select Create knowledge base.     4. Provide the knowledge base details such as the name, IAM role for the necessary access permissions, and any tags you want to assign to your knowledge base.     5. Choose a supported data source and provide the connection configuration details     6. For chunking and parsing configurations, first choose the custom option and then choose semantic chunking as your chunking strategy.     7. Enter the maximum number of sentences surrounding the target sentence to group together. Example: buffer size 1 is ?sentence previous?, ?sentence target?, ?sentence next?.     8. Enter the maximum token size for a text chunk.     9. Set the breakpoint threshold between sentence groups. The percentile threshold of sentence distance/dissimilarity to draw breakpoints between sentences."""""",
""""""Topics     ? Standard chunking     ? Hierarchical chunking     ? Semantic chunking     ? Advanced parsing options     ? Custom transformation     Standard chunking     Amazon Bedrock supports the following standard approaches to chunking:     ? Fixed-size chunking: You can configure the desired chunk size by specifying the number of tokens per chunk, and an overlap percentage, providing flexibility to align with your specific requirements. You can set the maximum number of tokens that must not exceed for a chunk and the overlap percentage between consecutive chunks.     ? Default chunking: Splits content into text chunks of approximately 300 tokens. The chunking process honors sentence boundaries, ensuring that complete sentences are preserved within each chunk.     How content chunking and parsing works 562Amazon Bedrock User Guide     You can also choose no chunking for your documents. Each document is treated a single text chunk. You might want to pre-process your documents by splitting them into separate files before choosing no chunking as your chunking approach/strategy.     The following is an example of configuring fixed-sized chunking:     Console     1. Sign in to the AWS Management Console using an IAM role with Amazon Bedrock permissions, and open the Amazon Bedrock console at https://console.aws.amazon.com/bedrock/.     2. From the left navigation pane, select Knowledge bases.     3. In the Knowledge bases section, select Create knowledge base.     4."""""",
""""""Note that the vector store itself is not deleted if you delete a knowledge base or data source resource.     Continue configuring your data source.     7. Choose either the default or customized chunking and parsing configurations.     a. If you choose custom settings, select one of the following chunking options:     ? Fixed-size chunking: Content split into chunks of text of your set approximate token size. You can set the maximum number of tokens that must not exceed for a chunk and the overlap percentage between consecutive chunks.     ? Default chunking: Content split into chunks of text of up to 300 tokens. If a single document or piece of content contains less than 300 tokens, the document is not further split.     ? Hierarchical chunking: Content organized into nested structures of parent-child chunks. You set the maximum parent chunk token size and the maximum child chunk token size. You also set the absolute number of overlap tokens between consecutive parent chunks and consecutive child chunks.     Create a data source 611Amazon Bedrock User Guide     ? Semantic chunking: Content organized into semantically similar text chunks or groups of sentences. You set the maximum number of sentences surrounding the target/current sentence to group together (buffer size). You also set the breakpoint percentile threshold for dividing the text into meaningful chunks. Semantic chunking uses a foundation model. View Amazon Bedrock pricing for information on the cost of foundation models.     ?"""""",
""""""Default chunking: Content split into chunks of text of up to 300 tokens. If a single document or piece of content contains less than 300 tokens, the document is not further split.     ? Hierarchical chunking: Content organized into nested structures of parent-child chunks. You set the maximum parent chunk token size and the maximum child chunk token size. You also set the absolute number of overlap tokens between consecutive parent chunks and consecutive child chunks.     ? Semantic chunking: Content organized into semantically similar text chunks or groups of sentences. You set the maximum number of sentences surrounding the target/current sentence to group together (buffer size). You also set the breakpoint percentile threshold for dividing the text into meaningful chunks. Semantic chunking uses a foundation model. View Amazon Bedrock pricing for information on the cost of foundation models.     Create a data source 630           https://learn.microsoft.com/en-us/azure/healthcare-apis/register-application         https://aws.amazon.com/bedrock/pricing/Amazon Bedrock User Guide     ? No chunking: Each document is treated as a single text chunk. You might want to pre-process your documents by splitting them into separate files.     Note     You can?t change the chunking strategy after you have created the data source.     b. You can choose to use Amazon Bedrock?s foundation model for parsing documents to parse more than standard text. You can parse tabular data within documents with their structure intact, for example."""""",
""""""No chunking: Each document is treated as a single text chunk. You might want to pre-process your documents by splitting them into separate files.     Note     You can?t change the chunking strategy after you have created the data source.     b. You can choose to use Amazon Bedrock?s foundation model for parsing documents to parse more than standard text. You can parse tabular data within documents with their structure intact, for example. View Amazon Bedrock pricing for information on the cost of foundation models.     c. You can choose to use an AWS Lambda function to customize your chunking strategy and how your document metadata attributes/fields are treated and ingested. Provide the Amazon S3 bucket location for the Lambda function input and output.     Go to the next section to configure your vector store.     10. Choose a model for converting your data into vector embeddings.     Create a data source 640           https://aws.amazon.com/bedrock/pricing/         https://aws.amazon.com/bedrock/pricing/Amazon Bedrock User Guide     Create a vector store to allow Amazon Bedrock to store, update, and manage embeddings. You can quick create a new vector store or select from a supported vector store you have created. Currently, only Amazon OpenSearch Serverless vector store is available to use with this data source. If you create a new vector store, an Amazon OpenSearch Serverless vector search collection and index with the required fields is set up for you."""""",
""""""No chunking: Each document is treated as a single text chunk. You might want to pre-process your documents by splitting them into separate files.     Note     You can?t change the chunking strategy after you have created the data source.     You can choose to use Amazon Bedrock?s foundation model for parsing documents to parse more than standard text. You can parse tabular data within documents with their structure intact, for example. View Amazon Bedrock pricing for information on the cost of foundation models.     You can choose to use an AWS Lambda function to customize your chunking strategy and how your document metadata attributes/fields are treated and ingested. Provide the Amazon S3 bucket location for the Lambda function input and output.     d. Select Next.     6. On the Select embeddings model and configure vector store page, choose a supported embeddings model to convert your data into vector embeddings for the knowledge base.     7. In the Vector store section, choose one of the following options to store the vector embeddings for your knowledge base:     ? Quick create a new vector store ? Amazon Bedrock creates an Amazon OpenSearch Serverless vector search collection for you. With this option, a public vector search collection and vector index is set up for you with the required fields and necessary configurations. After the collection is created, you can manage it in the Amazon OpenSearch Serverless console or through the AWS API."""""",
""""""You set the maximum number of sentences surrounding the target/current sentence to group together (buffer size). You also set the breakpoint percentile threshold for dividing the text into meaningful chunks. Semantic chunking uses a foundation model. View Amazon Bedrock pricing for information on the cost of foundation models.     ? No chunking: Each document is treated as a single text chunk. You might want to pre-process your documents by splitting them into separate files.     Note     You can?t change the chunking strategy after you have created the data source.     b. You can choose to use Amazon Bedrock?s foundation model for parsing documents to parse more than standard text. You can parse tabular data within documents with their structure intact, for example. View Amazon Bedrock pricing for information on the cost of foundation models.     c. You can choose to use an AWS Lambda function to customize your chunking strategy and how your document metadata attributes/fields are treated and ingested. Provide the Amazon S3 bucket location for the Lambda function input and output.     Go to the next section to configure your vector store.     12. Choose a model for converting your data into vector embeddings.     Create a data source 649           https://aws.amazon.com/bedrock/pricing/         https://aws.amazon.com/bedrock/pricing/Amazon Bedrock User Guide     Create a vector store to allow Amazon Bedrock to store, update, and manage embeddings."""""",
""""""The include and exclude patterns are crawled in accordance with your scope. If there's a conflict, the exclude pattern takes precedence.     11. Choose either the default or customized chunking and parsing configurations.     a. If you choose custom settings, select one of the following chunking options:     Create a data source 648           https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-ds.htmlAmazon Bedrock User Guide     ? Fixed-size chunking: Content split into chunks of text of your set approximate token size. You can set the maximum number of tokens that must not exceed for a chunk and the overlap percentage between consecutive chunks.     ? Default chunking: Content split into chunks of text of up to 300 tokens. If a single document or piece of content contains less than 300 tokens, the document is not further split.     ? Hierarchical chunking: Content organized into nested structures of parent-child chunks. You set the maximum parent chunk token size and the maximum child chunk token size. You also set the absolute number of overlap tokens between consecutive parent chunks and consecutive child chunks.     ? Semantic chunking: Content organized into semantically similar text chunks or groups of sentences. You set the maximum number of sentences surrounding the target/current sentence to group together (buffer size). You also set the breakpoint percentile threshold for dividing the text into meaningful chunks. Semantic chunking uses a foundation model."""""",
]"
"Give me a summary of the list of regions and models within the region, that support batch inference "," Based on the context provided, here's a summary of the regions and models that support batch inference:

1. US West (Oregon):
   - Meta Llama 3.1 8B Instruct
   - Meta Llama 3.1 70B Instruct
   - Meta Llama 3.1 405B Instruct
   - Meta Llama 3.2 1B Instruct
   - Meta Llama 3.2 3B Instruct
   - Meta Llama 3.2 11B Instruct
   - Meta Llama 3.2 90B Instruct
   - Mistral AI Mistral Large 2 (24.07)

2. US East (N. Virginia):
   - Mistral AI Mistral Small

3. Europe (Paris):
   - Meta Llama 3.2 1B Instruct
   - Meta Llama 3.2 3B Instruct

It's important to note that most other major AWS regions (such as Asia Pacific, Canada, Europe, and South America) are listed in the context, but they do not show support for these specific models for batch inference according to the information provided. ","[""""""Code examples for batch inference     Supported Regions and models for batch inference     The following list provides links to general information about regional and model support in Amazon Bedrock:     ? For a list of Region codes and endpoints supported in Amazon Bedrock, see Amazon Bedrock endpoints and quotas.     ? For a list of Amazon Bedrock model IDs to use when calling Amazon Bedrock API operations, see Amazon Bedrock model IDs.     The following table shows the AWS Regions and models that support batch inference:     Supported Regions and models 319           https://aws.amazon.com/bedrock/pricing/         https://docs.aws.amazon.com/general/latest/gr/bedrock.html         https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bedrock_region         https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bedrock_region         https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.htmlAmazon Bedrock User Guide     Model US East (N.""""""]"
What are the best practices to design effective prompts. Give me examples from different usecases ,"Based on the context provided, here are some best practices for designing effective prompts, along with examples from different use cases:

1. Provide simple, clear, and complete instructions:
   Example (Classification task):
   Good prompt: ""The most common cause of color blindness is an inherited problem or variation in the functionality of one or more of the three classes of cone cells in the retina, which mediate color vision. What is the above text about? a) biology b) history c) geology""
   This prompt clearly states the options, leading to a more accurate classification.

2. Place the question or instruction at the end of the prompt:
   Example (Open-book question-answering):
   ""Tensions increased after the 1911–1912 Italo-Turkish War demonstrated Ottoman weakness and led to the formation of the Balkan League, an alliance of Serbia, Bulgaria, Montenegro, and Greece. The League quickly overran most of the Ottomans' territory in the Balkans during the 1912–1913 First Balkan War, much to the surprise of outside observers. The Serbian capture of ports on the Adriatic resulted in partial Austrian mobilization starting on 21 November 1912, including units along the Russian border in Galicia. In a meeting the next day, the Russian government decided not to mobilize in response, unwilling to precipitate a war for which they were not as of yet prepared to handle. Which country captured ports?""
   Placing the question at the end helps the model focus on the relevant information.

3. Use separator characters for API calls:
   Example (Classification task):
   Prompt template: """"""{{Text}}
   {{Question}}
   {{Choice 1}} {{Choice 2}} {{Choice 3}}""""""
   Using separators like newline characters can improve model performance, especially for specific models like Anthropic Claude and Amazon Titan.
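
   As a small illustration (plain Python, not tied to any particular model), the template above can be filled with newline separators like this:

   ```python
   # Fill the {{Text}} / {{Question}} / {{Choices}} template using newline separators.
   text = ('The most common cause of color blindness is an inherited problem '
           'or variation in the functionality of one or more of the three '
           'classes of cone cells in the retina, which mediate color vision.')
   question = 'What is the above text about?'
   choices = 'a) biology b) history c) geology'

   # Newlines mark where each section of the template ends.
   prompt = '\n'.join([text, question, choices])
   print(prompt)
   ```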

4. Use output indicators:
   Example (Summarization task):
   ""Charles Mingus Jr. was an American jazz upright bassist, pianist, composer, bandleader, and author. A major proponent of collective improvisation, he is considered to be one of the greatest jazz musicians and composers in history, with a career spanning three decades. Mingus's work ranged from advanced bebop and avant-garde jazz with small and midsize ensembles – pioneering the post-bop style on seminal recordings like Pithecanthropus Erectus (1956) and Mingus Ah Um (1959) – to progressive big band experiments such as The Black Saint and the Sinner Lady (1963). Please summarize the above text in one phrase.""
   Specifying the desired output format (one phrase) helps constrain the model's response.

5. Reduce ambiguity:
   Ensure that the prompt clearly describes the task expectations and reduces any potential misinterpretations by the model.

6. Use appropriate formatting:
   For specific models like Anthropic Claude, use the format: ""\n\n ","[""""""You also learn about how you can control a model's response by using inference parameters. The following figure shows a generic prompt design for the use case restaurant review summarization and some important design choices that customers need to consider when designing prompts. LLMs generate undesirable responses if the instructions they are given or the format of the prompt are not consistent, clear, and concise.     Design a prompt 349Amazon Bedrock User Guide     (Source: Prompt written by AWS)     The following content provides guidance on how to create successful prompts.     Topics     ? Provide simple, clear, and complete instructions     ? Place the question or instruction at the end of the prompt for best results     ? Use separator characters for API calls     ? Use output indicators     ? Best practices for good generalization     ? Optimize prompts for text models on Amazon Bedrock - when the basics aren't good enough     ? Control the model response with inference parameters     Provide simple, clear, and complete instructions     LLMs on Amazon Bedrock work best with simple and straightforward instructions. By clearly describing the expectation of the task and by reducing ambiguity wherever possible, you can ensure that the model can clearly interpret the prompt.     Provide simple, clear, and complete instructions 350Amazon Bedrock User Guide     For example, consider a classification problem where the user wants an answer from a set of possible choices. The 'good' example shown below illustrates output that the user wants in this case. In the ?"""""",
"""""Also, this document is for general prompting guidelines. For model-specific guides, refer to their respective docs on Amazon Bedrock. This document provides a starting point. While the following example responses are generated using specific models on Amazon Bedrock, you can use other models in Amazon Bedrock to get results as well. The results may differ between models as each model has its own performance characteristics. The output that you generate using AI services     341           https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview         https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview         https://txt.cohere.com/how-to-train-your-pet-llm-prompt-engineering         https://docs.ai21.com/docs/prompt-engineering         https://ai.meta.com/llama/get-started/#prompting         https://platform.stability.ai/docs/getting-started         https://docs.mistral.ai/guides/prompting_capabilities/Amazon Bedrock User Guide     is your content. Due to the nature of machine learning, output may not be unique across customers and the services may generate the same or similar results across customers.     Topics     - What is a prompt?     - What is prompt engineering?     - Design a prompt     - Prompt templates and examples for Amazon Bedrock text models     What is a prompt?"""""",
"""""348Amazon Bedrock User Guide     - Open-ended text generation: Given a prompt, the model must respond with a passage of original text that matches the description. This also includes the generation of creative text such as stories, poems, or movie scripts.     - Code generation: The model must generate code based on user specifications. For example, a prompt could request text-to-SQL or Python code generation.     - Mathematics: The input describes a problem that requires mathematical reasoning at some level, which may be numerical, logical, geometric or otherwise.     - Reasoning or logical thinking: The model must make a series of logical deductions.     - Entity extraction: Entity extraction can extract entities based on a provided input question. You can extract specific entities from text or input based on your prompt.     - Chain-of-thought reasoning: Give step-by-step reasoning on how an answer is derived based on your prompt.     Design a prompt     Designing an appropriate prompt is an important step towards building a successful application using Amazon Bedrock models. In this section, you learn how to design a prompt that is consistent, clear, and concise. You also learn about how you can control a model's response by using inference parameters. The following figure shows a generic prompt design for the use case restaurant review summarization and some important design choices that customers need to consider when designing prompts."""""",
"""""Design a prompt     - Prompt templates and examples for Amazon Bedrock text models     What is a prompt?     Prompts are a specific set of inputs provided by you, the user, that guide LLMs on Amazon Bedrock to generate an appropriate response or output for a given task or instruction.     User Prompt: Who invented the airplane?     When queried by this prompt, Titan provides an output:     Output: The Wright brothers, Orville and Wilbur Wright are widely credited with inventing and manufacturing the world's first successful airplane.     (Source of prompt: AWS, model used: Amazon Titan Text)     Topics     - Components of a prompt     - Few-shot prompting vs. zero-shot prompting     - Prompt template     - Maintain recall over Amazon Bedrock inference requests     Components of a prompt     A single prompt includes several components, such as the task or instruction you want the LLMs to perform, the context of the task (for example, a description of the relevant domain),     What is a prompt? 342Amazon Bedrock User Guide     demonstration examples, and the input text that you want LLMs on Amazon Bedrock to use in its response. Depending on your use case, the availability of the data, and the task, your prompt should combine one or more of these components.     Consider this example prompt asking Titan to summarize a review:     ?"""""",
"""""It also covers tools to help you find the best possible prompt format for your use case when using a LLM on Amazon Bedrock.     Note     All examples in this doc are obtained via API calls. The response may vary due to the stochastic nature of the LLM generation process. If not otherwise specified, the prompts are written by employees of AWS.     Amazon Bedrock includes models from a variety of providers. The following is a list of prompt engineering guidelines for those models.     - Anthropic Claude model prompt guide: https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview     - Cohere prompt guide: https://txt.cohere.com/how-to-train-your-pet-llm-prompt-engineering     - AI21 Labs Jurassic model prompt guide: https://docs.ai21.com/docs/prompt-engineering     - Meta Llama 2 prompt guide: https://ai.meta.com/llama/get-started/#prompting     - Stability AI prompt guide: https://platform.stability.ai/docs/getting-started     - Mistral AI prompt guide: https://docs.mistral.ai/guides/prompting_capabilities/     Disclaimer: The examples in this document use the current text models available within Amazon Bedrock. Also, this document is for general prompting guidelines. For model-specific guides, refer to their respective docs on Amazon Bedrock. This document provides a starting point."""""",
"""""In other words, prompt engineering is the art of communicating with an LLM. High-quality prompts condition the LLM to generate desired or better responses. The detailed guidance provided within this document is applicable across all LLMs within Amazon Bedrock.     The best prompt engineering approach for your use case is dependent on both the task and the data. Common tasks supported by LLMs on Amazon Bedrock include:     - Classification: The prompt includes a question with several possible choices for the answer, and the model must respond with the correct choice. An example classification use case is sentiment analysis: the input is a text passage, and the model must classify the sentiment of the text, such as whether it's positive or negative, or harmless or toxic.     - Question-answer, without context: The model must answer the question with its internal knowledge without any context or document.     - Question-answer, with context: The user provides an input text with a question, and the model must answer the question based on information provided within the input text.     - Summarization: The prompt is a passage of text, and the model must respond with a shorter passage that captures the main points of the input.     What is prompt engineering? 348Amazon Bedrock User Guide     - Open-ended text generation: Given a prompt, the model must respond with a passage of original text that matches the description."""""",
"""""); }     }     Code examples 340Amazon Bedrock User Guide     Prompt engineering concepts     Prompt engineering refers to the practice of optimizing textual input to a Large Language Model (LLM) to obtain desired responses. Prompting helps a LLM perform a wide variety of tasks, including classification, question answering, code generation, creative writing, and more. The quality of prompts that you provide to a LLM can impact the quality of the model's responses. This section provides you the necessary information to get started with prompt engineering. It also covers tools to help you find the best possible prompt format for your use case when using a LLM on Amazon Bedrock.     Note     All examples in this doc are obtained via API calls. The response may vary due to the stochastic nature of the LLM generation process. If not otherwise specified, the prompts are written by employees of AWS.     Amazon Bedrock includes models from a variety of providers. The following is a list of prompt engineering guidelines for those models.     - Anthropic Claude model prompt guide: https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview     - Cohere prompt guide: https://txt.cohere.com/how-to-train-your-pet-llm-prompt-engineering     - AI21 Labs Jurassic model prompt guide: https://docs.ai21.com/docs/prompt-engineering     -"""""",
"""""Orchestration     - Knowledge base response generation     - Post-processing (disabled by default)     Prompt templates define how the agent does the following:     - Processes user input text and output prompts from foundation models (FMs)     - Orchestrates between the FM, action groups, and knowledge bases     - Formats and returns responses to the user     By using advanced prompts, you can enhance your agent's accuracy through modifying these prompt templates to provide detailed configurations. You can also provide hand-curated examples for few-shot prompting, in which you improve model performance by providing labeled examples for a specific task.     Select a topic to learn more about advanced prompts.     Topics     - Advanced prompts terminology     - Advanced prompt templates     - Configure advanced prompts     - Use placeholder variables in Amazon Bedrock agent prompt templates     - Modify parser Lambda function in Amazon Bedrock Agents     Enhance your agent's accuracy using advanced prompt templates 795Amazon Bedrock User Guide     Advanced prompts terminology     The following terminology is helpful in understanding how advanced prompts work.     - Session - A group of InvokeAgent requests made to the same agent with the same session ID. When you make an InvokeAgent request, you can reuse a sessionId that was returned from the response of a previous call in order to continue the same session with an agent. As long as the idleSessionTTLInSeconds time in the Agent configuration hasn't expired, you maintain the same session with the agent.     - Turn - A single InvokeAgent call."""""",
""""""A strong, descriptive prompt that clearly defines elements, colors, and subjects will lead to better results."""""",
"""""Base prompt template - The structural elements that make up a prompt. The template consists of placeholders that are filled in with user input, the agent configuration, and context at runtime to create a prompt for the foundation model to process when the agent reaches that step. For more information about these placeholders, see Use placeholder variables in Amazon Bedrock agent prompt templates. With advanced prompts, you can edit these templates.     Advanced prompt templates     With advanced prompts, you can do the following:     - Edit the default base prompt templates that the agent uses. By overriding the logic with your own configurations, you can customize your agent's behavior.     - Configure their inference parameters.     - Turn on or turn off invocation for different steps in the agent sequence.     Enhance your agent's accuracy using advanced prompt templates 796           https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_InvokeAgent.html         https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_Agent.htmlAmazon Bedrock User Guide     For each step of the agent sequence, you can edit the following parts:     Prompt template     Describes how the agent should evaluate and use the prompt that it receives at the step for which you're editing the template. Note the following differences depending on the model that you're using:     - If you're using Anthropic Claude Instant, Claude v2.0, or Claude v2.1, the prompt templates must be raw text.     -"""""",

]"
Show me some prompt template examples for common tasks supported by bedrock text models ,"Here are some prompt template examples for common tasks supported by Amazon Bedrock text models:

Text classification:
""Tell me the sentiment of the following {{Text Type, e.g., ""restaurant review""}} and categorize it as either {{Sentiment A}} or {{Sentiment B}}. Here are some examples:

Text: {{Example Input 1}} Answer: {{Sentiment A}}
Text: {{Example Input 2}} Answer: {{Sentiment B}}
Text: {{Input}} Answer:""

Question-answer, without context:
""{{Question}}
Answer:""

Question-answer, with context:
""{{Context}}
{{Question}}
Answer:""

Summarization:
""Summarize the following text in 3 sentences:
{{Text}}
Summary:"" ","[""""""There are also model-specific inference parameters. Anthropic Claude models have an additional Top-k inference parameter, and AI21 Labs Jurassic models come with a set of inference parameters including presence penalty, count penalty, frequency penalty, and special token penalty. For more information, refer to their respective documentation.     Prompt templates and examples for Amazon Bedrock text models     Common tasks supported by LLMs on Amazon Bedrock include text classification, summarization, and questions and answers (with and without context). For these tasks, you can use the following templates and examples to help you create prompts for Amazon Bedrock text models.     Topics     - Text classification     - Question-answer, without context     - Question-answer, with context     - Summarization     - Text generation     - Code generation     - Mathematics     - Reasoning/logical thinking     - Entity extraction     - Chain-of-thought reasoning     Prompt templates and examples for Amazon Bedrock text models 361Amazon Bedrock User Guide     Text classification     For text classification, the prompt includes a question with several possible choices for the answer, and the model must respond with the correct choice. Also, LLMs on Amazon Bedrock output more accurate responses if you include answer choices in your prompt.     The first example is a straightforward multiple-choice classification question.     Prompt template for Titan ""{{Text}} {{Question}}?"""""",
"""""It also covers tools to help you find the best possible prompt format for your use case when using a LLM on Amazon Bedrock.     Note     All examples in this doc are obtained via API calls. The response may vary due to the stochastic nature of the LLM generation process. If not otherwise specified, the prompts are written by employees of AWS.     Amazon Bedrock includes models from a variety of providers. The following is a list of prompt engineering guidelines for those models.     - Anthropic Claude model prompt guide: https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview     - Cohere prompt guide: https://txt.cohere.com/how-to-train-your-pet-llm-prompt-engineering     - AI21 Labs Jurassic model prompt guide: https://docs.ai21.com/docs/prompt-engineering     - Meta Llama 2 prompt guide: https://ai.meta.com/llama/get-started/#prompting     - Stability AI prompt guide: https://platform.stability.ai/docs/getting-started     - Mistral AI prompt guide: https://docs.mistral.ai/guides/prompting_capabilities/     Disclaimer: The examples in this document use the current text models available within Amazon Bedrock. Also, this document is for general prompting guidelines. For model-specific guides, refer to their respective docs on Amazon Bedrock. This document provides a starting point."""""",
"""""Also, this document is for general prompting guidelines. For model-specific guides, refer to their respective docs on Amazon Bedrock. This document provides a starting point. While the following example responses are generated using specific models on Amazon Bedrock, you can use other models in Amazon Bedrock to get results as well. The results may differ between models as each model has its own performance characteristics. The output that you generate using AI services     341           https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview         https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview         https://txt.cohere.com/how-to-train-your-pet-llm-prompt-engineering         https://docs.ai21.com/docs/prompt-engineering         https://ai.meta.com/llama/get-started/#prompting         https://platform.stability.ai/docs/getting-started         https://docs.mistral.ai/guides/prompting_capabilities/Amazon Bedrock User Guide     is your content. Due to the nature of machine learning, output may not be unique across customers and the services may generate the same or similar results across customers.     Topics     - What is a prompt?     - What is prompt engineering?     - Design a prompt     - Prompt templates and examples for Amazon Bedrock text models     What is a prompt?"""""",
"""""Base prompt template - The structural elements that make up a prompt. The template consists of placeholders that are filled in with user input, the agent configuration, and context at runtime to create a prompt for the foundation model to process when the agent reaches that step. For more information about these placeholders, see Use placeholder variables in Amazon Bedrock agent prompt templates. With advanced prompts, you can edit these templates.     Advanced prompt templates     With advanced prompts, you can do the following:     - Edit the default base prompt templates that the agent uses. By overriding the logic with your own configurations, you can customize your agent's behavior.     - Configure their inference parameters.     - Turn on or turn off invocation for different steps in the agent sequence.     Enhance your agent's accuracy using advanced prompt templates 796           https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_InvokeAgent.html         https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_Agent.htmlAmazon Bedrock User Guide     For each step of the agent sequence, you can edit the following parts:     Prompt template     Describes how the agent should evaluate and use the prompt that it receives at the step for which you're editing the template. Note the following differences depending on the model that you're using:     - If you're using Anthropic Claude Instant, Claude v2.0, or Claude v2.1, the prompt templates must be raw text.     -"""""",
"""""348Amazon Bedrock User Guide     - Open-ended text generation: Given a prompt, the model must respond with a passage of original text that matches the description. This also includes the generation of creative text such as stories, poems, or movie scripts.     - Code generation: The model must generate code based on user specifications. For example, a prompt could request text-to-SQL or Python code generation.     - Mathematics: The input describes a problem that requires mathematical reasoning at some level, which may be numerical, logical, geometric or otherwise.     - Reasoning or logical thinking: The model must make a series of logical deductions.     - Entity extraction: Entity extraction can extract entities based on a provided input question. You can extract specific entities from text or input based on your prompt.     - Chain-of-thought reasoning: Give step-by-step reasoning on how an answer is derived based on your prompt.     Design a prompt     Designing an appropriate prompt is an important step towards building a successful application using Amazon Bedrock models. In this section, you learn how to design a prompt that is consistent, clear, and concise. You also learn about how you can control a model's response by using inference parameters. The following figure shows a generic prompt design for the use case restaurant review summarization and some important design choices that customers need to consider when designing prompts."""""",
"""""Orchestration     - Knowledge base response generation     - Post-processing (disabled by default)     Prompt templates define how the agent does the following:     - Processes user input text and output prompts from foundation models (FMs)     - Orchestrates between the FM, action groups, and knowledge bases     - Formats and returns responses to the user     By using advanced prompts, you can enhance your agent's accuracy through modifying these prompt templates to provide detailed configurations. You can also provide hand-curated examples for few-shot prompting, in which you improve model performance by providing labeled examples for a specific task.     Select a topic to learn more about advanced prompts.     Topics     - Advanced prompts terminology     - Advanced prompt templates     - Configure advanced prompts     - Use placeholder variables in Amazon Bedrock agent prompt templates     - Modify parser Lambda function in Amazon Bedrock Agents     Enhance your agent's accuracy using advanced prompt templates 795Amazon Bedrock User Guide     Advanced prompts terminology     The following terminology is helpful in understanding how advanced prompts work.     - Session - A group of InvokeAgent requests made to the same agent with the same session ID. When you make an InvokeAgent request, you can reuse a sessionId that was returned from the response of a previous call in order to continue the same session with an agent. As long as the idleSessionTTLInSeconds time in the Agent configuration hasn't expired, you maintain the same session with the agent.     - Turn - A single InvokeAgent call."""""",
"""""Enter TEXT (currently, only text prompts are supported).     - templateConfiguration - The text field maps to a TextPromptTemplateConfiguration. Fill out the following fields in it:     - text - The message for the prompt. Enclose variables in double curly braces: {{variable}}.     Create a prompt 392           https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_TextPromptTemplateConfiguration.html.htmlAmazon Bedrock User Guide     - inputVariables - For each object in the list, enter each variable that you created in the name field.     - inferenceConfiguration - The text field maps to a PromptModelInferenceConfiguration. To learn more about inference parameters, see Influence response generation with inference parameters.     - metadata - Metadata to associate with the prompt variant. You can append key-value pairs to the array to tag the prompt variant with metadata.     The following fields are optional:     Field Use case     description To provide a description for the prompt.     clientToken To ensure the API request completes only once. For more information, see Ensuring idempotency.     tags To associate tags with the flow. For more information, see Tagging Amazon Bedrock resources.     customerEncryptionKeyArn To encrypt the flow with a KMS key. For more information, see Key policy to allow Amazon Bedrock to encrypt and decrypt a flow."""""",
""""""using advanced prompt templates 868Amazon Bedrock User Guide      \'rationale\': rationale } } # Check if there is a final answer try: final_answer, generated_response_parts = parse_answer(sanitized_response) except ValueError as e: addRepromptResponse(parsed_response, e) return parsed_response if final_answer: parsed_response[\'orchestrationParsedResponse\'][\'responseDetails\'] = { \'invocationType\': \'FINISH\', \'agentFinalResponse\': { \'responseText\': final_answer } } if generated_response_parts: parsed_response[\'orchestrationParsedResponse\'][\'responseDetails\'] [\'agentFinalResponse\'][\'citations\'] = { \'generatedResponseParts\': generated_response_parts } logger.info(""Final answer parsed response: "" + str(parsed_response)) return parsed_response # Check if there is an ask user try: ask_user = parse_ask_user(sanitized_response) if ask_user: parsed_response[\'orchestrationParsedResponse\'][\'responseDetails\'] = { \'invocationType\': \'ASK_USER\', \'agentAskUser\': { \'responseText\': ask_user } } logger.info(""Ask user parsed response: "" + str(parsed_response)) return parsed_response except ValueError as e: addRepromptResponse(parsed_response, e) return parsed_response     Enhance your agent\'s accuracy using advanced prompt templates 869Amazon Bedrock User Guide      # Check if there is an agent action try: parsed_response = parse_function_call(sanitized_response, parsed_response) logger.info(""Function call parsed"""""",
]"