
feat: Add agent shared config and refactor react agent to module#148

Open
fede-kamel wants to merge 6 commits into oracle:main from fede-kamel:feature/deep-agent-support

Conversation

Contributor

@fede-kamel fede-kamel commented Feb 26, 2026

Summary

Introduce shared agent infrastructure as the foundation for the deep research agent stack.

  • Add AgentConfig base class and _build_llm() shared builder in agents/common.py — reusable config for all OCI agent types
  • Refactor react agent from single file (react.py) to module (react/) using the shared config
  • Add _compat.py with langgraph schema compatibility patches for upstream OmitFromSchema/NotRequired annotation bugs
  • Fix tool schema conversion to use tool_call_schema instead of args_schema for deepagents middleware compatibility
  • Add supports_tool_choice to GenericProvider for structured output
  • Fix multi-turn tool call counting to scope to current turn only
  • Add configurable request timeout via auth.py

What Changed

| File | Change |
| --- | --- |
| agents/common.py | New — AgentConfig, _build_llm, _get_agent_factory, _filter_none |
| agents/react.py | Deleted — replaced by module |
| agents/react/__init__.py | New — re-exports create_oci_agent |
| agents/react/agent.py | New — create_oci_agent refactored to use AgentConfig |
| agents/__init__.py | Updated — exports AgentConfig |
| _compat.py | New — langgraph schema patches |
| chat_models/providers/base.py | New — _overlay_schema_extras, supports_tool_choice |
| chat_models/providers/cohere.py | Fix — use tool_call_schema for tool conversion |
| chat_models/providers/generic.py | Fix — _get_tool_parameters, turn-scoped tool counting |
| chat_models/oci_generative_ai.py | Fix — tool_choice="required" for structured output |
| common/auth.py | New — configurable timeout via _resolve_timeout |

Test plan

  • 16 unit tests passing (test_react.py)
  • 45 unit tests passing (test_tool_schema_constraints, test_tool_call_turn_scoping)
  • Integration tests passing (test_react_integration, test_tool_calling, test_response_format)
  • 60 integration tests passing against live ADB + OpenSearch (full stack)

Context

This is PR 1 of 13 in the deep research agent stack (includes structured output fix from #190). See the comment below for the full breakdown.

The oracle-contributor-agreement bot added the OCA Verified label ("All contributors have signed the Oracle Contributor Agreement") Feb 26, 2026
fede-kamel force-pushed the feature/deep-agent-support branch 2 times, most recently from d6bfd4b to 7bbdbd1 on February 28, 2026 at 12:14
fede-kamel changed the title from "Add deep agent support with deepagents integration" to "Refactor datastore tools with improved architecture" on Feb 28, 2026
fede-kamel force-pushed the feature/deep-agent-support branch 3 times, most recently from 5be2a7c to 99f0787 on February 28, 2026 at 21:20
fede-kamel changed the title from "Refactor datastore tools with improved architecture" to "Add deep research agent with datastore support" on Feb 28, 2026
fede-kamel force-pushed the feature/deep-agent-support branch 7 times, most recently from 021f445 to 386f59a on February 28, 2026 at 23:11
fede-kamel changed the title from "Add deep research agent with datastore support" to "(feat): Implement deepresearch agent with datastore support" on Feb 28, 2026
fede-kamel force-pushed the feature/deep-agent-support branch 2 times, most recently from acc38fe to 10e9bf7 on February 28, 2026 at 23:59
fede-kamel marked this pull request as ready for review on March 1, 2026 at 00:00
fede-kamel force-pushed the feature/deep-agent-support branch from 10e9bf7 to 9405738 on March 1, 2026 at 00:06
fede-kamel changed the title from "(feat): Implement deepresearch agent with datastore support" to "(feat): Implement deepresearch agent with ADB and Opensearch support" on Mar 1, 2026
fede-kamel force-pushed the feature/deep-agent-support branch 3 times, most recently from 646255e to c74d23d on March 1, 2026 at 22:25
fede-kamel changed the title from "(feat): Implement deepresearch agent with ADB and Opensearch support" to "Deep Research: OracleVS path, chunked ADB ingestion, and integration hardening" on Mar 3, 2026
fede-kamel changed the title from "Deep Research: OracleVS path, chunked ADB ingestion, and integration hardening" to "Add Deep Research agent: datastore tools (ADB/OpenSearch), examples, and test suite" on Mar 3, 2026
fede-kamel changed the title from "Add Deep Research agent: datastore tools (ADB/OpenSearch), examples, and test suite" to "feat: Add Deep Research agent with datastore tools (ADB/OpenSearch)" on Mar 3, 2026
fede-kamel force-pushed the feature/deep-agent-support branch 3 times, most recently from 1a23643 to d3801ee on March 4, 2026 at 03:11
fede-kamel force-pushed the feature/deep-agent-support branch 9 times, most recently from 66428e4 to 3a476bf on March 25, 2026 at 20:55
from pydantic import BaseModel

from langchain_oci.chat_models.providers.base import Provider
from langchain_oci.chat_models.providers.generic import GenericProvider
Member

Why do we need generic provider in cohere?

Contributor Author

Done — removed the GenericProvider import from cohere.py entirely. The only thing it was used for was _overlay_schema_extras, which is now on the base Provider class (see next comment).

schema = tool_call_schema.model_json_schema()
# Overlay json_schema_extra constraints from args_schema
if tool.args_schema:
    GenericProvider._overlay_schema_extras(schema, tool.args_schema)
Member

If it's only for this function, we might think about converting it to a method on the base provider or a function in a helper?

Contributor Author

Moved _overlay_schema_extras to the base Provider class as a @staticmethod. Both GenericProvider and CohereProvider now inherit it — cohere calls self._overlay_schema_extras() instead of GenericProvider._overlay_schema_extras().
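For readers following along, a minimal sketch of what an overlay helper of this kind might look like — the real signature and field handling live in the PR; the names and the exact constraint keys here are assumptions:

```python
from typing import Any, Dict


def overlay_schema_extras(schema: Dict[str, Any], args_schema: Any) -> Dict[str, Any]:
    """Sketch: copy each field's json_schema_extra constraints (e.g. minimum,
    maximum, pattern) from args_schema onto the matching property of an
    already-generated JSON schema, leaving other keys untouched."""
    props = schema.get("properties", {})
    for name, field in getattr(args_schema, "model_fields", {}).items():
        extra = getattr(field, "json_schema_extra", None)
        if isinstance(extra, dict) and name in props:
            props[name].update(extra)
    return schema
```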

Comment thread libs/oci/langchain_oci/agents/common.py Outdated


@dataclass(frozen=True)
class OCIConfig:
Member

Do we really need this OCIConfig? To me it is too generic for OCI and probably doesn't fit in the scope of agents.

Contributor Author

Agreed — removed OCIConfig entirely. Replaced it with a Pydantic AgentConfig(BaseModel) that handles OCI env-var resolution via @model_validator. Both create_oci_agent and create_deep_research_agent now share this config + a _build_llm(config) helper in common.py, eliminating the duplicated LLM creation logic.
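As a rough illustration of that shape — the field names, env-var names, and defaults below are assumptions for the sketch, not the PR's exact ones:

```python
import os
from typing import Optional

from pydantic import BaseModel, model_validator


class AgentConfig(BaseModel):
    """Sketch of a shared agent config with env-var fallback resolution."""

    model_id: Optional[str] = None
    compartment_id: Optional[str] = None
    auth_type: Optional[str] = None

    @model_validator(mode="after")
    def _resolve_from_env(self) -> "AgentConfig":
        # Fall back to OCI environment variables when fields are unset
        # (variable names are illustrative).
        if self.compartment_id is None:
            self.compartment_id = os.environ.get("OCI_COMPARTMENT_ID")
        if self.auth_type is None:
            self.auth_type = os.environ.get("OCI_AUTH_TYPE", "API_KEY")
        return self
```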

Comment thread libs/oci/langchain_oci/agents/common.py Outdated
return {k: v for k, v in kwargs.items() if v is not None}


def merge_model_kwargs(
Member

I remember we have the same logic in both providers already.

Contributor Author

Consolidated into a single _build_llm(config) function in common.py that both agents call. No more duplicated OCI config resolution or ChatOCIGenAI instantiation.

Member

I don't think we need the datastores folder. We can move the tools and vectorstores to the top level. Those should be generic to not only agent use cases.

Contributor Author

Done — moved datastores to langchain_oci.datastores/ (top-level, not under agents). Removed the re-exports from agents/__init__.py and deep_research/__init__.py. Import path is now from langchain_oci.datastores import OpenSearch, ADB, create_datastore_tools.

Member

I think there are several refactors we can do. There is certainly duplication between the react agent and the deep research agent. Can we consolidate those into an abstraction and reuse code as much as possible?

Contributor Author

Consolidated. Both agents now share:

  • AgentConfig(BaseModel) — base Pydantic config with OCI params + env-var resolution
  • _build_llm(config) — shared ChatOCIGenAI builder
  • _get_agent_factory() — shared langchain/langgraph version detection
  • _filter_none() — shared utility

DeepResearchConfig extends AgentConfig with datastore routing and deepagents-specific fields. The deep research agent is split into _create_from_config, _build_lightweight, and _build_deep for clarity.

) from None


def create_deep_research_agent(
Member

For a function that requires this many input arguments, it might be better to define a class or protocol with a dataclass or Pydantic model for better validation.

Contributor Author

Done — introduced DeepResearchConfig(AgentConfig) as a Pydantic BaseModel. All the arguments are now typed fields with defaults and validation. The function signature still accepts kwargs for convenience, but internally it builds a DeepResearchConfig immediately.

# Datastores - if provided, auto-routing search is enabled
datastores: Optional[Dict[str, VectorDataStore]] = None,
default_datastore: Optional[str] = None,
default_store: Optional[str] = None, # Alias for default_datastore
Member

For aliasing, do not introduce a new function arg. For a dataclass or Pydantic model, there is a more standard way to define it.

Contributor Author

Fixed — using standard Pydantic Field(alias=...) on the model:

default_datastore: Optional[str] = Field(default=None, alias="default_store")

With model_config = {"populate_by_name": True}, both default_datastore and default_store work as field names.
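A runnable sketch of that pattern, with the config reduced to the single aliased field:

```python
from typing import Optional

from pydantic import BaseModel, Field


class DeepResearchConfig(BaseModel):
    """Sketch: Field(alias=...) plus populate_by_name accepts both names."""

    model_config = {"populate_by_name": True}

    default_datastore: Optional[str] = Field(default=None, alias="default_store")


# Both spellings populate the same field:
print(DeepResearchConfig(default_store="adb").default_datastore)           # adb
print(DeepResearchConfig(default_datastore="opensearch").default_datastore)  # opensearch
```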

embedding_model: Any = None,
top_k: int = 5,
# OCI-specific options
model_id: str = "google.gemini-2.5-pro",
Member

Potentially wrap those into a single Runnable or LLMBase? All LangChain concepts are built around it.

Contributor Author

Both create_oci_agent and create_deep_research_agent return CompiledStateGraph, which is already a LangChain Runnable — supports .invoke(), .ainvoke(), .stream(), .astream(), and pipes. The config models (AgentConfig, DeepResearchConfig) handle all the setup; the returned object is a clean Runnable ready to use.

Introduce AgentConfig base class and shared LLM builder (agents/common.py)
as foundation for all OCI agent types. Refactor react agent from single
file (react.py) to module (react/) using the shared config.

Add _compat.py with langgraph schema compatibility patches for upstream
OmitFromSchema/NotRequired annotation bugs.
@fede-kamel
Contributor Author

@YouNeedCryDear — Per your feedback on PR size, I've split this into a stack of smaller, focused PRs (~500 lines each). Here's the breakdown:

Independent fixes (can merge in any order):

This PR (#148) is now trimmed to just the foundation:

  • Agent shared config (AgentConfig, _build_llm) + react agent module refactor (~475 lines)

Remaining stack (merge in order after #148):

All 76 unit tests and 58 integration tests pass against live ADB and OpenSearch. Each PR in the stack includes all predecessor code so CI can run independently.

Happy to start reviews from this PR whenever you're ready.

fede-kamel changed the title from "feat: Add Deep Research agent with datastore tools (ADB/OpenSearch)" to "feat: Add agent shared config and refactor react agent to module" on Apr 15, 2026
Fix tool schema conversion to use tool_call_schema (model-facing) instead
of args_schema to avoid serialization failures with injected runtime
parameters. Add supports_tool_choice to GenericProvider so
with_structured_output forces tool use. Fix multi-turn tool call counting
to scope to current turn only, preventing stale counts from blocking
legitimate tool use. Add configurable request timeout via auth.py.

Migrate response_format tests from unit to integration and add coverage
for tool_choice, empty-description schemas, and turn-scoped counting.
Comment thread libs/oci/langchain_oci/_compat.py Outdated
def _needs_patch() -> bool:
    """Check if the current langgraph version needs the schema patch."""
    try:
        from langgraph._internal import _pydantic as langgraph_pydantic
Member

Is it because of the old langchain version we need to support, or is it just a different API for different langgraph versions?

Contributor Author

@fede-kamel fede-kamel Apr 22, 2026

Neither really — it's a workaround for an upstream pydantic bug where OmitFromSchema + NotRequired (used by deepagents' tool schemas) makes langgraph's create_model blow up. So the patch isn't tied to supporting older langchain, it's compensating for a specific pydantic/langgraph interaction that deepagents schemas happen to hit. I added an upper-bound version check so the patch will auto-retire once upstream fixes it, rather than applying forever.

Comment thread libs/oci/langchain_oci/_compat.py Outdated
# since `from X import create_model` copies the reference.
import sys

for mod_name in ("langgraph.pregel.main", "langgraph.pregel"):
Member

This doesn't feel like a safe way. Is it possible to condition on the langgraph version and only create an alias for the create_model function?

Contributor Author

@fede-kamel fede-kamel Apr 22, 2026

Fair point, it was way too silent. Gated it on a langgraph version check now, and if a module that imported create_model has a reference that's drifted from what we're patching, it'll log a warning instead of just quietly skipping. That way if langgraph moves stuff around internally we'll actually notice rather than have the patch silently become a no-op.

Contributor Author

You were right to push on this — the global import-time patch was the wrong shape. Rewrote it.

_compat.py is gone. In its place there's a _langgraph_schema_fallback context manager in agents/common.py, and the agent factories (create_oci_agent here on #148, create_deep_research_agent on #197) wrap their create_agent_func(...) / create_deep_agent(...) call with it:

with _langgraph_schema_fallback():
    return create_agent_func(**agent_kwargs)

The patch is only live for the duration of that call and restored cleanly on exit. No side effects on import, nothing to version-gate, nothing to remove from __init__.py. If someone uses this package without ever calling one of our agent factories, langgraph internals are never touched. The exception catch is narrowed to real types (PydanticForbiddenQualifier imported at module load, with TypeError as a fallback).

Also confirmed no behavioral regression — 347 unit tests still green across the stack, and the integration tests I ran against real OCI (Meta Llama, Cohere, Gemini, Grok, OpenAI) still pass.
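The general shape of a call-scoped patch like this can be sketched generically — this is not the PR's exact _langgraph_schema_fallback, just the swap-then-restore pattern it follows:

```python
from contextlib import contextmanager
from typing import Any, Iterator


@contextmanager
def scoped_patch(target: Any, attr: str, replacement: Any) -> Iterator[None]:
    """Swap in a replacement attribute for the duration of the block,
    restoring the original binding even if the body raises. Nothing is
    touched at import time; the patch lives only inside the `with`."""
    original = getattr(target, attr)
    setattr(target, attr, replacement)
    try:
        yield
    finally:
        setattr(target, attr, original)
```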

Comment thread libs/oci/langchain_oci/_compat.py Outdated
field_definitions=field_definitions,
root=root,
)
except Exception as ex:
Member

Can we narrow down the exception here?

Contributor Author

@fede-kamel fede-kamel Apr 22, 2026

Good call, the class-name string match was pretty janky. Switched it to import PydanticForbiddenQualifier directly (with a fallback to TypeError for older pydantic versions that don't expose it at the top level), and the except clause now catches those types properly.

Comment thread libs/oci/langchain_oci/agents/common.py Outdated
merged_kwargs["temperature"] = config.temperature
if config.max_tokens is not None:
    if config.model_id and config.model_id.startswith("openai."):
        merged_kwargs["max_completion_tokens"] = config.max_tokens
Member

Do you think it is better to change it on the provider level?

Contributor Author

@fede-kamel fede-kamel Apr 22, 2026

Yeah, you're right — that branch had no business being in the agent layer. Moved it to a new OpenAIProvider subclass that does the remap in normalize_params, same pattern GeminiProvider already uses for max_output_tokens → max_tokens. agents/common.py just passes max_tokens through regardless of provider now. Also dropped the ad-hoc warning in _prepare_request since it's redundant with the silent normalize.
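In sketch form — a minimal stand-in for the two providers, stripped of all the state the real ones carry:

```python
from typing import Any, Dict


class GenericProvider:
    def normalize_params(self, params: Dict[str, Any]) -> Dict[str, Any]:
        # Base behavior: pass parameters through unchanged.
        return dict(params)


class OpenAIProvider(GenericProvider):
    """Sketch: remap max_tokens -> max_completion_tokens at the provider
    layer, mirroring the GeminiProvider max_output_tokens pattern, so the
    agent layer can stay model-agnostic."""

    def normalize_params(self, params: Dict[str, Any]) -> Dict[str, Any]:
        params = super().normalize_params(params)
        if "max_tokens" in params:
            params["max_completion_tokens"] = params.pop("max_tokens")
        return params
```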

...

@staticmethod
def _overlay_schema_extras(
Member

Probably better as a function in utils?

Contributor Author

@fede-kamel fede-kamel Apr 22, 2026

Yeah, it belongs next to resolve_schema_refs and sanitize_schema in OCIUtils. I left it as a static method on Provider in this PR just to keep the diff focused on the bigger items you flagged — happy to do the move as a quick follow-up if you'd rather not bundle it in here.

Contributor Author

Did it in this PR rather than deferring — moved to OCIUtils.overlay_schema_extras alongside resolve_schema_refs / sanitize_schema. Callers in GenericProvider and CohereProvider now go through OCIUtils.overlay_schema_extras(...).

# model returns structured output instead of free-form text.
bind_kwargs: Dict[str, Any] = {**kwargs}
if self._provider.supports_tool_choice:
    bind_kwargs["tool_choice"] = "required"
Member

Has this been validated on models other than OpenAI?

Contributor Author

@fede-kamel fede-kamel Apr 22, 2026

Good flag — I ran the full matrix to be sure this doesn't regress anyone. OpenAI (gpt-4.1, gpt-5.1 / 5.2 / 5.4 + mini + nano), Meta Llama (3.3-70b, 4-scout, 4-maverick), Gemini 2.5 (flash, flash-lite, pro), Grok (grok-4-fast, grok-3-mini), and Cohere (command-a, command-r-plus) — all pass across test_structured_output_no_docstring, _with_enum, and _nested.

I also added a small escape hatch: if a caller passes an explicit tool_choice in kwargs, we honor it instead of overriding to "required". Just in case a future model handles "required" poorly and someone wants to opt out without having to work around the library.
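The escape hatch reduces to a one-line condition; a hypothetical standalone helper (not the PR's actual method) to show the behavior:

```python
from typing import Any, Dict


def build_bind_kwargs(kwargs: Dict[str, Any], supports_tool_choice: bool) -> Dict[str, Any]:
    """Sketch: force tool_choice="required" for structured output, but
    honor a caller-supplied tool_choice instead of overriding it."""
    bind_kwargs = dict(kwargs)
    if supports_tool_choice and "tool_choice" not in bind_kwargs:
        bind_kwargs["tool_choice"] = "required"
    return bind_kwargs
```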

# Override via OCI_REQUEST_TIMEOUT env var (read timeout only)
# or pass timeout= to create_oci_client_kwargs.
_DEFAULT_CONNECT_TIMEOUT = 10
_DEFAULT_READ_TIMEOUT = 240
Member

I think there are default timeouts defined for both the OCI GenAI API call and general OCI API calls. Any reason not to use those?

Contributor Author

@fede-kamel fede-kamel Apr 22, 2026

The SDK default is (10, 60) — the 240s read timeout came from hitting actual timeouts on long reasoning responses (gpt-5 / o3-style) and streaming completions that legitimately take over a minute. Probably should've had a comment explaining that from the start; I added one now. Users can still dial it back to the SDK default via OCI_REQUEST_TIMEOUT or the timeout= kwarg if they want tighter bounds.

schema = OCIUtils.sanitize_schema(schema)
properties = schema.get("properties", {})
if properties is None:
    as_json_schema_function = convert_to_openai_function(tool)
Member

Is there any difference between how Cohere expects the function vs OpenAI?

Contributor Author

@fede-kamel fede-kamel Apr 22, 2026

No functional difference where it actually matters — the fallback only reads parameters.properties and parameters.required, which are plain JSON Schema and the same across both providers. The OCI-specific Cohere shape gets built right below via self.oci_tool_param(...), so whatever format convert_to_openai_function hands us, we only extract the JSON-Schema subset out of it.

That said, you're right that the variable name as_json_schema_function = convert_to_openai_function(tool) reads like we're forwarding OpenAI format straight to Cohere, which isn't what's happening. Happy to rename it to fallback_tool_schema or similar for clarity as a quick follow-up.

Contributor Author

@fede-kamel fede-kamel Apr 22, 2026

Went and actually verified it rather than just reasoning about it. Ran the same tool through both paths:

@tool
def weather(city: str, units: str = "fahrenheit") -> str:
    """Get weather. Args: city: City name. units: Temperature units."""

# Primary path (cohere.py:621-630):
primary = weather.tool_call_schema.model_json_schema()
primary = OCIUtils.resolve_schema_refs(primary)
primary = OCIUtils.resolve_anyof(primary)
primary = OCIUtils.sanitize_schema(primary)
# -> properties: {'city': {'type':'string'}, 'units': {'type':'string', 'default':'fahrenheit'}}
# -> required:   {'city'}

# Fallback path (cohere.py:631-635):
fallback = convert_to_openai_function(weather)['parameters']
# -> properties: {'city': {'type':'string'}, 'units': {'type':'string', 'default':'fahrenheit'}}
# -> required:   ['city']

What oci_tool_param(...) actually consumes from that — the {name, type, is_required} tuple for each parameter — is identical between the two paths:

primary:  {'city': {'type': 'string', 'required': True}, 'units': {'type': 'string', 'required': False}}
fallback: {'city': {'type': 'string', 'required': True}, 'units': {'type': 'string', 'required': False}}

Also did an end-to-end call against cohere.command-a-03-2025 to be thorough:

tool_calls: [{'name': 'get_weather', 'args': {'city': 'Paris', 'units': 'metric'}, ...}]

Cohere picked up the tool correctly.

One thing worth mentioning: the fallback almost never fires in practice. Both @tool-decorated functions (which become StructuredTool) and plain Tool instances come out of langchain_core with tool_call_schema already set and having model_json_schema, so the fallback is really defensive code for legacy / custom BaseTool subclasses that leave it unset. But even in that edge case the two paths converge on the same JSON-Schema subset, so no Cohere/OpenAI format divergence. Still going to rename as_json_schema_function in the follow-up for clarity.

- Add OpenAIProvider to normalize max_tokens -> max_completion_tokens at
  the provider layer; remove leaky openai.-prefix branch in agents/common
  and the ad-hoc warning in _prepare_request
- Version-gate _compat.py langgraph patch via _LANGGRAPH_MAX_PATCHED_VERSION
  so it self-expires, and log a warning when sys.modules rebind target
  differs from the original create_model reference
- Narrow _compat.py exception catch: import pydantic's ForbiddenQualifier
  at module load and use real types instead of class-name string matching
- Respect caller-supplied tool_choice in with_structured_output so users
  can opt out of "required" forcing on providers that handle it poorly
- Document the 240s read timeout override in common/auth.py (SDK default
  is 60s, LLM streaming legitimately exceeds that)
Collapse the tool_choice condition onto one line to satisfy ruff format.
@fede-kamel
Contributor Author

fede-kamel commented Apr 22, 2026

@YouNeedCryDear I pushed 9924f83 + 227734b addressing your comments — CI's green. Quick rundown of what changed:

  • _compat.py — version-gated the langgraph monkey patch so it auto-retires once the upstream pydantic bug gets fixed, and it now logs a warning if the sys.modules rebind target has drifted (previously it silently did nothing). Also switched the except Exception + class-name match to a real except (PydanticForbiddenQualifier, TypeError).
  • providers/generic.py + friends — added an OpenAIProvider(GenericProvider) that does the max_tokens → max_completion_tokens remap in normalize_params, same pattern as GeminiProvider. The openai.-prefix branch is gone from agents/common.py and the ad-hoc warning is gone from _prepare_request.
  • oci_generative_ai.py — with_structured_output now honors a caller-supplied tool_choice in kwargs rather than always overriding with "required", so there's an escape hatch if a model handles it poorly.
  • common/auth.py — documented the 240s read-timeout override (the SDK default is 60s, which isn't enough for long reasoning / streaming).

Replied inline on the other four — the _overlay_schema_extras move and the cohere variable rename I left as small follow-ups to keep this PR focused.

On the tool_choice validation question I ran the full structured-output matrix across providers:

| Family | Models | Result |
| --- | --- | --- |
| OpenAI | gpt-4.1, gpt-5.1, gpt-5.2, gpt-5.4, gpt-5.4-mini, gpt-5.4-nano, gpt-oss-20b, gpt-oss-120b | Pass |
| Meta Llama | 3.3-70b, 4-scout-17b, 4-maverick-17b | Pass |
| Gemini | 2.5-flash, 2.5-flash-lite, 2.5-pro | Pass |
| Grok | grok-4-1-fast-non-reasoning, grok-3-mini-fast | Pass |
| Cohere | command-a-03-2025, command-r-plus-08-2024 | Pass |

287 unit tests + 167 integration tests passing, ruff + mypy clean, no regressions.

fede-kamel added a commit to fede-kamel/langchain-oracle that referenced this pull request Apr 22, 2026
After the oracle#148 refactor, OpenAIProvider.normalize_params remaps max_tokens
to max_completion_tokens at request-prep time. The agent layer now stays
model-agnostic and passes max_tokens through unchanged, so the test now
asserts that behavior instead of the old agents/common.py remap.
- Move overlay_schema_extras from Provider base class to OCIUtils,
  next to resolve_schema_refs / sanitize_schema where it belongs, and
  update the two call sites in CohereProvider and GenericProvider.
- Delete the unused langchain_oci/_compat.py module. The langgraph
  monkey-patch and its helpers had no callers in any branch. If the
  upstream pydantic/langgraph schema bug resurfaces, a targeted
  call-site fix is safer than patching internal modules on import.
fede-kamel added a commit to fede-kamel/langchain-oracle that referenced this pull request Apr 22, 2026
Replace the module-level _compat monkey patch with a contextmanager
(_langgraph_schema_fallback) in agents/common.py that's active only
around the create_agent call. Wrap create_oci_agent's factory call
with it.

This addresses the review concern that reaching into
langgraph._internal._pydantic on package import was fragile and
global. Now the patch is scoped to the specific call that can trigger
the pydantic bug (OmitFromSchema + NotRequired on deepagents'
runtime-injected tool fields), catches real exception types instead
of class-name strings, and cleanly restores the original bindings on
exit. Exception-type imports (PydanticForbiddenQualifier) resolved at
module load with a TypeError fallback for older pydantic.