
Reka Flash On Premise

Nov 13, 2024
Install the Python SDK with pip install "reka-api>=2.0.0". You can then use your API key to query the models:

```python
from reka.client import Reka

# You can also set the API key using the REKA_API_KEY environment variable.
client = Reka(api_key="YOUR_API_KEY")

response = client.chat.create(
    messages=[
        {
            "content": "What is the fifth prime number?",
            "role": "user",
        }
    ],
    model="reka-core-20240501",
)
print(response.responses[0].message.content)
```

This will print a response like:

The fifth prime number is 11. Here's a quick breakdown of the first five prime numbers in order: 2, 3, 5, 7, 11.

Chat

Our models can be accessed through the chat API. This page gives an introduction to using the chat API via the Python SDK.

Single Turn Prompting

A simple single turn request can be made as follows:

```python
from reka.client import Reka

# You can also set the API key using the REKA_API_KEY environment variable.
client = Reka(api_key="YOUR_API_KEY")

response = client.chat.create(
    messages=[
        {
            "content": "Write a python one-liner to flatten a list of lists.",
            "role": "user",
        }
    ],
    model="reka-core-20240501",
)
print(response.responses[0].message.content)
```

This will return a response like:

Here's a Python one-liner using list comprehension to flatten a list of lists:

```python
flattened_list = [item for sublist in nested_list for item in sublist]
```

Replace `nested_list` with your actual list of lists. This one-liner works by iterating over each sublist and then over each item within it.

See Available Models for details on valid model names.

Multiple Turn Conversations

You can request a response to a multiple turn conversation by adding more messages in the history. For example:

```python
from reka.client import Reka

client = Reka(api_key="YOUR_API_KEY")

response = client.chat.create(
    messages=[
        {
            "content": "My name is Matt.",
            "role": "user",
        },
        {
            "content": "Hello Matt! How can I help you today?",
            "role": "assistant",
        },
        {
            "content": "Can you think of a couple of famous people with the same name?",
            "role": "user",
        },
    ],
    model="reka-core-20240501",
)
print(response.responses[0].message.content)
```

This will return a response like:

Certainly, Matt is a popular name, and there are several famous individuals with it:

1. **Matt Damon** - An acclaimed actor known for his roles in movies like "Good Will Hunting".
2. **Matt LeBlanc** - An actor best known for playing Joey Tribbiani on the television sitcom "Friends".
3. **Matt Groening** - The creator of the iconic animated television series "The Simpsons".
4. **Matt Smith** - An actor who played the Eleventh Doctor in the British television series "Doctor Who".
5. **Matt Bomer** - An actor, producer, and director known for his roles in "White Collar".
6. **Matt Ryan** - An actor known for his role as John Constantine in the television series "Constantine".

These are just a few examples of the many famous Matts out there. Each has made significant contributions to their field.

Assistant Completions

We support guiding the assistant output (e.g. prompting it to output a structured JSON response) by allowing the developer to specify how the assistant response should start. This is done by adding a partial assistant response as the last message:

```python
from reka.client import Reka

client = Reka(api_key="YOUR_API_KEY")

prompt = """
Below is a paragraph from wikipedia:

The Solar System is the gravitationally bound system of the Sun and the objects that orbit it.
The largest of such objects are the eight planets, in order from the Sun: four terrestrial planets named Mercury,
Venus, Earth and Mars, two gas giants named Jupiter and Saturn, and two ice giants named Uranus and Neptune.
The terrestrial planets have a definite surface and are mostly made of rock and metal. The gas giants are
mostly made of hydrogen and helium, while the ice giants are mostly made of 'volatile' substances such as water,
ammonia, and methane. In some texts, these terrestrial and giant planets are called the inner and outer
Solar System planets respectively.

Extract information about the planets from this paragraph as a JSON list of objects with keys 'planetName' and
'composition'. The 'composition' key should contain one or two words, and there should be one object per planet.
""".strip()

json_prefix = """
[
""".strip()

# Add the partial assistant response as the last message; the model continues from it.
response = client.chat.create(
    messages=[
        {"content": prompt, "role": "user"},
        {"content": json_prefix, "role": "assistant"},
    ],
    model="reka-core-20240501",
)
print(json_prefix + response.responses[0].message.content)
```

This will output:

```json
[
  {
    "planetName": "Mercury",
    "composition": "rock, metal"
  },
  {
    "planetName": "Venus",
    "composition": "rock, metal"
  },
  {
    "planetName": "Earth",
    "composition": "rock, metal"
  },
  {
    "planetName": "Mars",
    "composition": "rock, metal"
  },
  {
    "planetName": "Jupiter",
    "composition": "hydrogen, helium"
  }
```

Useful Parameters

The parameters of the chat API are fully documented in the API reference, but some particularly useful parameters are listed below:

- temperature: Typically between 0 and 1. Values close to 0 will result in less varied generations, and higher values will result in more variation and creativity.
- max_tokens: The maximum number of tokens that should be returned. Increase this if generations are being truncated, i.e. the finish_reason in the response is "length".
- stop: A list of strings that should stop the generation. This can be used to stop after generating a code block, when reaching a certain number in a list, etc.

Streaming

The chat API supports streaming with the chat.create_stream function in the Python SDK, or by setting stream to true in the HTTP API. Below is an example of streaming in Python:

```python
from reka import ChatMessage
from reka.client import Reka

client = Reka(api_key="YOUR_API_KEY")

response = client.chat.create_stream(
    messages=[
        ChatMessage(
            content="Write a detailed template NDA contract between two parties.",
            role="user",
        )
    ],
    max_tokens=2048,
    model="reka-core-20240501",
)

for chunk in response:
    print(chunk.responses[0].chunk.content)
```

Async

The Python SDK also exports an async client so that you can make non-blocking calls to our API. This can be useful to make batch requests. The following code illustrates how to batch calls to the API, by creating a list of async tasks, and gathering them with asyncio.gather.
The Semaphore limits the number of concurrent requests to the API.

```python
import asyncio

from reka.client import AsyncReka

client = AsyncReka(api_key="YOUR_API_KEY")
max_concurrent_requests = 2
semaphore = asyncio.Semaphore(max_concurrent_requests)


async def respond(prompt: str) -> str:
    async with semaphore:
        response = await client.chat.create(
            messages=[
                {
                    "content": prompt,
                    "role": "user",
                }
            ],
            model="reka-flash",
        )
    return response.responses[0].message.content


async def main() -> None:
    # Create a list of async tasks and gather their results concurrently.
    prompts = ["What is the fifth prime number?", "Name a gas giant."]
    answers = await asyncio.gather(*(respond(p) for p in prompts))
    for answer in answers:
        print(answer)


asyncio.run(main())
```

Image, Video, and Audio Chat

The chat API supports multimodal inputs, including images, videos, and audio. You can insert multimodal content in the conversation by using media content types. The supported types are: image_url, video_url, audio_url, and pdf_url. Below is an example of sending an image of a cat by URL:

```python
from reka import ChatMessage
from reka.client import Reka

client = Reka()
response = client.chat.create(
    messages=[
        ChatMessage(
            content=[
                {"type": "image_url", "image_url": "https://v0.docs.reka.ai/_i"},
                {"type": "text", "text": "What animal is this? Answer briefly."},
            ],
            role="user",
        )
    ],
    model="reka-core-20240501",
)
print(response.responses[0].message.content)
```

This will output a response like:

The animal in the image is a domestic cat. Specifically, it appears to be a ginger or orange tabby cat, which is characterized by its reddish-brown fur with darker stripes or patches. The cat is engaging in a common feline behavior of sniffing or licking objects, which in this case is a computer keyboard. Cats are known for their curiosity and often explore their environment by using their sense of smell, which is highly developed. The act of licking or sniffing can also be a way for cats to mark their territory with pheromones from their saliva.
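Each part of a multimodal message is just a typed dict, so assembling mixed content is straightforward. The helper below is a sketch, not part of the SDK (media_message is a hypothetical name); it builds a user message from any number of image URLs plus a text prompt:

```python
def media_message(text: str, image_urls: list[str]) -> dict:
    """Build a multimodal user message: image parts first, then a text part."""
    content = [{"type": "image_url", "image_url": url} for url in image_urls]
    content.append({"type": "text", "text": text})
    return {"role": "user", "content": content}


msg = media_message("What animal is this?", ["https://example.com/cat.jpg"])
print([part["type"] for part in msg["content"]])  # -> ['image_url', 'text']
```

The resulting dict can be passed directly in the messages list of chat.create, in place of a ChatMessage.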
Data URLs

The API supports sending media via data URLs. For example, you could base64-encode a jpeg image and then set image_url to a value like "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAASABIAAD/4QmoRXhpZgAATU0AKgAAAAgADQEPAAIAA...".

Multiple media

You can send multiple media files in your request by appending them to the content for a user message like this:

```python
response = client.chat.create(
    messages=[
        ChatMessage(
            content=[
                {"type": "image_url", "image_url": "https://example.com/image_"},
                {"type": "image_url", "image_url": "https://example.com/image_"},
                {"type": "text", "text": "What colours and shapes are present in these images?"},
            ],
            role="user",
        )
    ],
    model="reka-core-20240501",
)
```

Streaming, Async, and other advanced usage

Please see the guide for text-only chat for more guidance on the advanced features of the chat API, which also work for multimodal inputs.

Available Models

Our baseline models, always available for public access, are:

- reka-edge
- reka-flash
- reka-core

Other models may be available. The Get Models API allows you to list what models you have available to you. Using the Python SDK, it can be accessed as follows:

```python
from reka.client import Reka

client = Reka()
print(client.models.get())
```

This will give output like:

```python
[
    Model(id='reka-core'),
    Model(id='reka-core-20240415'),
    Model(id='reka-core-20240501'),
    Model(id='reka-flash'),
    Model(id='reka-flash-20240226'),
    Model(id='reka-edge'),
    Model(id='reka-edge-20240208'),
]
```

All of these model IDs can be used as values for the chat API's model parameter.

API Versioning

| API version | Python SDK | HTTP endpoint | Documentation |
|---|---|---|---|
| v0 | reka-api <= 2.0.0 | https://api.reka.ai/ | https://v0.docs.reka.ai/ |
| v1 | reka-api > 2.0.0 | https://api.reka.ai/v1 | https://docs.reka.ai/ (here) |
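Since dated snapshot IDs like reka-core-20240501 sort chronologically as strings, one way to pin the newest available snapshot of a model family is to filter and sort the IDs returned by the Get Models API. The helper below is a sketch (newest_snapshot is a hypothetical name, not part of the SDK); in practice the IDs would come from client.models.get():

```python
def newest_snapshot(model_ids: list[str], family: str = "reka-core") -> str:
    """Return the most recent dated snapshot of a model family, assuming IDs of
    the form '<family>-YYYYMMDD'; fall back to the bare family alias."""
    dated = sorted(i for i in model_ids if i.startswith(family + "-"))
    return dated[-1] if dated else family


ids = ["reka-core", "reka-core-20240415", "reka-core-20240501", "reka-flash"]
print(newest_snapshot(ids))  # -> reka-core-20240501
```

Pinning a dated snapshot keeps behaviour stable across model updates, whereas the bare alias (e.g. reka-core) tracks whatever the API currently serves under that name.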


Multimodal AI you can deploy anywhere. Next-generation AI models to empower agents that can read, see, hear, and speak.