# google.generativeai.GenerativeModel
The `genai.GenerativeModel` class wraps default parameters for calls to `GenerativeModel.generate_content`, `GenerativeModel.count_tokens`, and `GenerativeModel.start_chat`.
```
google.generativeai.GenerativeModel(
    model_name: str = 'gemini-1.5-flash-002',
    safety_settings: (safety_types.SafetySettingOptions | None) = None,
    generation_config: (generation_types.GenerationConfigType | None) = None,
    tools: (content_types.FunctionLibraryType | None) = None,
    tool_config: (content_types.ToolConfigType | None) = None,
    system_instruction: (content_types.ContentType | None) = None
)
```
This family of functionality is designed to support multi-turn conversations and multimodal requests. Which media types are supported for input and output is model-dependent.
```
>>> import google.generativeai as genai
>>> import PIL.Image
>>> genai.configure(api_key='YOUR_API_KEY')
>>> model = genai.GenerativeModel('models/gemini-1.5-flash')
>>> result = model.generate_content('Tell me a story about a magic backpack')
>>> result.text
"In the quaint little town of Lakeside, there lived a young girl named Lily..."
```
#### Multimodal input:
```
>>> model = genai.GenerativeModel('models/gemini-1.5-flash')
>>> result = model.generate_content([
... "Give me a recipe for these:", PIL.Image.open('scones.jpeg')])
>>> result.text
"**Blueberry Scones** ..."
```
#### Multi-turn conversation:
```
>>> chat = model.start_chat()
>>> response = chat.send_message("Hi, I have some questions for you.")
>>> response.text
"Sure, I'll do my best to answer your questions..."
```
To list compatible model names, use:
```
>>> for m in genai.list_models():
... if 'generateContent' in m.supported_generation_methods:
... print(m.name)
```
| Arguments | |
|---|---|
| `model_name` | The name of the model to query. To list compatible models, use `genai.list_models`. |
| `safety_settings` | Sets the default safety filters. This controls which content is blocked by the API before being returned. |
| `generation_config` | A `genai.GenerationConfig` setting the default generation parameters to use. |
| Attributes | |
|---|---|
| `cached_content` | |
| `model_name` | |
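As a minimal sketch of how these defaults can be set (the parameter values below are illustrative, not recommendations):
```
>>> model = genai.GenerativeModel(
...     'models/gemini-1.5-flash',
...     generation_config=genai.GenerationConfig(
...         temperature=0.2,
...         max_output_tokens=256),
...     system_instruction='You are a helpful assistant.')
>>> model.generate_content('Hi there!').text
```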
## Methods
### count_tokens
```
count_tokens(
    contents: content_types.ContentsType = None,
    *,
    generation_config: (generation_types.GenerationConfigType | None) = None,
    safety_settings: (safety_types.SafetySettingOptions | None) = None,
    tools: (content_types.FunctionLibraryType | None) = None,
    tool_config: (content_types.ToolConfigType | None) = None,
    request_options: (helper_types.RequestOptionsType | None) = None
) -> protos.CountTokensResponse
```
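A minimal usage sketch; the returned `protos.CountTokensResponse` reports the count through its `total_tokens` field:
```
>>> model = genai.GenerativeModel('models/gemini-1.5-flash')
>>> response = model.count_tokens('Tell me a story about a magic backpack')
>>> print(response.total_tokens)
```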
### count_tokens_async
```
count_tokens_async(
    contents=None,
    *,
    generation_config=None,
    safety_settings=None,
    tools=None,
    tool_config=None,
    request_options=None
)
```
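The async variant takes the same arguments but must be awaited; a minimal sketch using `asyncio`:
```
>>> import asyncio
>>> async def main():
...     model = genai.GenerativeModel('models/gemini-1.5-flash')
...     response = await model.count_tokens_async('Hello!')
...     print(response.total_tokens)
>>> asyncio.run(main())
```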
### from_cached_content
```
@classmethod
from_cached_content(
    cached_content: (str | caching.CachedContent),
    *,
    generation_config: (generation_types.GenerationConfigType | None) = None,
    safety_settings: (safety_types.SafetySettingOptions | None) = None
) -> GenerativeModel
```
Creates a model with `cached_content` as the model's context.
| Args | |
|---|---|
| `cached_content` | Context for the model. |
| `generation_config` | Overrides for the model's generation config. |
| `safety_settings` | Overrides for the model's safety settings. |

| Returns |
|---|
| A `GenerativeModel` object with `cached_content` as its context. |
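A minimal sketch, assuming a cache created beforehand with `genai.caching.CachedContent.create` (`document_part` here is a hypothetical placeholder for whatever content you want cached):
```
>>> import datetime
>>> from google.generativeai import caching
>>> cache = caching.CachedContent.create(
...     model='models/gemini-1.5-flash-001',
...     contents=[document_part],  # hypothetical pre-loaded content
...     ttl=datetime.timedelta(minutes=30))
>>> model = genai.GenerativeModel.from_cached_content(cached_content=cache)
>>> response = model.generate_content('Summarize the cached document.')
```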
### generate_content
```
generate_content(
    contents: content_types.ContentsType,
    *,
    generation_config: (generation_types.GenerationConfigType | None) = None,
    safety_settings: (safety_types.SafetySettingOptions | None) = None,
    stream: bool = False,
    tools: (content_types.FunctionLibraryType | None) = None,
    tool_config: (content_types.ToolConfigType | None) = None,
    request_options: (helper_types.RequestOptionsType | None) = None
) -> generation_types.GenerateContentResponse
```
A multipurpose function to generate responses from the model. This `GenerativeModel.generate_content` method can handle multimodal input and multi-turn conversations.
```
>>> model = genai.GenerativeModel('models/gemini-1.5-flash')
>>> response = model.generate_content('Tell me a story about a magic backpack')
>>> response.text
```
#### Streaming
This method supports streaming with `stream=True`. The result has the same type as in the non-streaming case, but you can iterate over the response chunks as they become available:
```
>>> response = model.generate_content('Tell me a story about a magic backpack', stream=True)
>>> for chunk in response:
... print(chunk.text)
```
#### Multi-turn
This method supports multi-turn chats but is **stateless**: the entire conversation history needs to be sent with each
request. This takes some manual management but gives you complete control:
```
>>> messages = [{'role':'user', 'parts': ['hello']}]
>>> response = model.generate_content(messages) # "Hello, how can I help"
>>> messages.append(response.candidates[0].content)
>>> messages.append({'role':'user', 'parts': ['How does quantum physics work?']})
>>> response = model.generate_content(messages)
```
For a simpler multi-turn interface see `GenerativeModel.start_chat`.
#### Input type flexibility
While the underlying API strictly expects a `list[protos.Content]`, this method will convert the user input into the correct type. The hierarchy of convertible types is below. Any of these objects can be passed as an equivalent `dict`.
* `Iterable[protos.Content]`
* `protos.Content`
* `Iterable[protos.Part]`
* `protos.Part`
* `str`, `Image`, or `protos.Blob`
In an `Iterable[protos.Content]`, each `content` is a separate message, but an `Iterable[protos.Part]` is taken as the parts of a single message; the sketch below illustrates the difference.
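A sketch of the two interpretations (the dict forms shown are equivalent to the corresponding protos objects):
```
>>> # An iterable of Content dicts: three separate messages.
>>> model.generate_content([
...     {'role': 'user', 'parts': ['Hello']},
...     {'role': 'model', 'parts': ['Hi! How can I help?']},
...     {'role': 'user', 'parts': ['Tell me a joke.']}])
>>> # An iterable of parts: one message with two parts.
>>> model.generate_content(
...     ['Give me a recipe for these:', PIL.Image.open('scones.jpeg')])
```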
| Arguments | |
|---|---|
| `contents` | The contents serving as the model's prompt. |
| `generation_config` | Overrides for the model's generation config. |
| `safety_settings` | Overrides for the model's safety settings. |
| `stream` | If True, yield response chunks as they are generated. |
| `tools` | `protos.Tools` more info coming soon. |
| `request_options` | Options for the request. |
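As a sketch of how `tools` can be used, the SDK accepts plain Python functions with type hints; the model may then answer with a function-call part rather than text (whether it does depends on the model and the prompt):
```
>>> def add(a: int, b: int) -> int:
...     """Returns the sum of a and b."""
...     return a + b
>>> response = model.generate_content('What is 2 + 3? Use the add tool.',
...                                   tools=[add])
>>> response.candidates[0].content.parts[0].function_call
```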
### generate_content_async
```
generate_content_async(
    contents,
    *,
    generation_config=None,
    safety_settings=None,
    stream=False,
    tools=None,
    tool_config=None,
    request_options=None
)
```
The async version of `GenerativeModel.generate_content`.
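A minimal sketch; the call is awaited but otherwise mirrors the synchronous method:
```
>>> import asyncio
>>> async def main():
...     model = genai.GenerativeModel('models/gemini-1.5-flash')
...     response = await model.generate_content_async('Tell me a story.')
...     print(response.text)
>>> asyncio.run(main())
```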
### start_chat
```
start_chat(
    *,
    history: (Iterable[content_types.StrictContentType] | None) = None,
    enable_automatic_function_calling: bool = False
) -> ChatSession
```
Returns a `genai.ChatSession` attached to this model.
```
>>> model = genai.GenerativeModel()
>>> chat = model.start_chat(history=[...])
>>> response = chat.send_message("Hello?")
```
| Arguments | |
|---|---|
| `history` | An iterable of `protos.Content` objects, or equivalents to initialize the session. |
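For instance, a session can be seeded with earlier turns expressed as dicts (a sketch; history items follow the same conversion rules as `generate_content`):
```
>>> chat = model.start_chat(history=[
...     {'role': 'user', 'parts': ['Hello']},
...     {'role': 'model', 'parts': ['Hi! How can I help you today?']}])
>>> response = chat.send_message('What did I just say?')
>>> len(chat.history)  # the session accumulates turns automatically
4
```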