# google.generativeai.ChatSession
Contains an ongoing conversation with the model.
```
google.generativeai.ChatSession(
    model: GenerativeModel,
    history: (Iterable[content_types.StrictContentType] | None) = None,
    enable_automatic_function_calling: bool = False
)
```
```
>>> model = genai.GenerativeModel('models/gemini-1.5-flash')
>>> chat = model.start_chat()
>>> response = chat.send_message("Hello")
>>> print(response.text)
>>> response = chat.send_message("Hello again")
>>> print(response.text)
>>> response = chat.send_message(...
```
This `ChatSession` object collects the messages sent and received in its `ChatSession.history` attribute.
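The `enable_automatic_function_calling` flag lets the session execute tool calls on your behalf and send the results back to the model. A minimal sketch, assuming Python functions passed as `tools` on the model; the `add` helper is hypothetical, for illustration only:

```
>>> def add(a: float, b: float) -> float:
...     """Returns the sum of two numbers."""  # hypothetical tool function
...     return a + b
>>> model = genai.GenerativeModel('models/gemini-1.5-flash', tools=[add])
>>> chat = model.start_chat(enable_automatic_function_calling=True)
>>> response = chat.send_message("What is 2 + 3?")
>>> print(response.text)
```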
| Arguments | |
| :--- | :--- |
| `model` | The model to use in the chat. |
| `history` | A chat history to initialize the object with. |
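For example, `history` accepts earlier turns expressed as content dicts. A sketch, assuming the SDK's `role`/`parts` dict convention:

```
>>> history = [
...     {'role': 'user', 'parts': ["What is the capital of France?"]},
...     {'role': 'model', 'parts': ["Paris."]},
... ]
>>> chat = model.start_chat(history=history)
>>> response = chat.send_message("And of Germany?")
```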
| Attributes | |
| :--- | :--- |
| `history` | The chat history. |
| `last` | Returns the last received `genai.GenerateContentResponse`. |
## Methods
### rewind

```
rewind() -> tuple[protos.Content, protos.Content]
```
Removes the last request/response pair from the chat history.
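A sketch of the round trip; the returned tuple is the removed request/response pair:

```
>>> chat = model.start_chat()
>>> response = chat.send_message("Hello")
>>> len(chat.history)
2
>>> last_request, last_response = chat.rewind()
>>> len(chat.history)
0
```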
### send_message

```
send_message(
    content: content_types.ContentType,
    *,
    generation_config: generation_types.GenerationConfigType = None,
    safety_settings: safety_types.SafetySettingOptions = None,
    stream: bool = False,
    tools: (content_types.FunctionLibraryType | None) = None,
    tool_config: (content_types.ToolConfigType | None) = None,
    request_options: (helper_types.RequestOptionsType | None) = None
) -> generation_types.GenerateContentResponse
```
Sends the conversation history with the added message and returns the model's response.
Appends the request and response to the conversation history.
```
>>> model = genai.GenerativeModel('models/gemini-1.5-flash')
>>> chat = model.start_chat()
>>> response = chat.send_message("Hello")
>>> print(response.text)
"Hello! How can I assist you today?"
>>> len(chat.history)
2
```
Call it with `stream=True` to receive response chunks as they are generated:
```
>>> chat = model.start_chat()
>>> response = chat.send_message("Explain quantum physics", stream=True)
>>> for chunk in response:
... print(chunk.text, end='')
```
Once iteration over chunks is complete, the `response` and `ChatSession` are in states identical to the
`stream=False` case. Some properties are not available until iteration is complete.
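For example, once the loop finishes, the aggregated response and the updated history should look just as they would without streaming (a sketch):

```
>>> response = chat.send_message("Explain quantum physics", stream=True)
>>> for chunk in response:
...     print(chunk.text, end='')
>>> response.text  # the full aggregated text, available after iteration
>>> len(chat.history)
2
```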
Like `GenerativeModel.generate_content`, this method lets you override the model's `generation_config` and `safety_settings`.
| Arguments | |
| :--- | :--- |
| `content` | The message contents. |
| `generation_config` | Overrides for the model's generation config. |
| `safety_settings` | Overrides for the model's safety settings. |
| `stream` | If True, yield response chunks as they are generated. |
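A sketch of a per-call override using `genai.GenerationConfig`; the values shown are illustrative:

```
>>> response = chat.send_message(
...     "Write a one-line poem.",
...     generation_config=genai.GenerationConfig(
...         temperature=0.2,      # less randomness for this turn only
...         max_output_tokens=64,
...     ),
... )
```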
### send_message_async

```
send_message_async(
    content,
    *,
    generation_config=None,
    safety_settings=None,
    stream=False,
    tools=None,
    tool_config=None,
    request_options=None
)
```
The async version of `ChatSession.send_message`.
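A minimal usage sketch, assuming an `asyncio` event loop:

```
>>> import asyncio
>>> async def main():
...     chat = model.start_chat()
...     response = await chat.send_message_async("Hello")
...     print(response.text)
>>> asyncio.run(main())
```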