Added Llama 3.1 agent, Llama 3.1 response parser and initial module structure
Created by: jrzkaminski
Hello! This PR introduces a pre-alpha version of the LLM agent code. The code was initially written for a very narrow set of tasks, so further testing in different environments is essential. Code improvements, further generalization, contributions to the PR, and suggestions are all very welcome.
The PR introduces two sub-modules:
- Agent wrappers, responsible for context manipulation and agent interactions.
- Response parsers, responsible for turning raw LLM output into something executable (see the sketch after this list).
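For illustration, here is a minimal sketch of the kind of work such a parser does, assuming Llama 3.1's JSON tool-call format. The function name `parse_tool_call` is hypothetical; the actual `Llama31ResponseParser` interface in this PR may differ.

```python
import json
from typing import Any, Optional


def parse_tool_call(raw_output: str) -> Optional[dict[str, Any]]:
    """Hypothetical sketch: extract a tool call from raw model output.

    Llama 3.1 commonly emits tool calls as a JSON object such as
    {"name": "get_weather", "parameters": {"city": "Berlin"}}.
    Returns that dict, or None if the output is a plain-text answer.
    """
    try:
        candidate = json.loads(raw_output.strip())
    except json.JSONDecodeError:
        return None  # plain text, not an executable tool call
    if isinstance(candidate, dict) and {"name", "parameters"} <= candidate.keys():
        return candidate
    return None
```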
Since different models use different role names (e.g. tool/ipython, tool_call/assistant) and special tokens (</tool_token> and so on), two design choices emerge:
- Generalize the agent class further by making roles, tokens, and endpoints customizable.
- Provide user-friendly classes for specific models and expand/deprecate them as new models are released.
Both options have advantages and disadvantages, and I am unsure which is better in our case; a rough sketch of the first option follows.
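To make the trade-off concrete, here is a rough sketch of what the first option could look like. `ModelProfile` and its field names are hypothetical and not part of this PR; the token values shown are the ones Llama 3.1 uses.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ModelProfile:
    """Hypothetical per-model configuration a single generic agent could consume."""
    tool_role: str        # role for tool results, e.g. "ipython" in Llama 3.1
    assistant_role: str   # role for model turns, e.g. "assistant"
    tool_call_open: str   # token marking the start of a tool call
    tool_call_close: str  # token marking the end of a tool call


# Llama 3.1 values; other models would supply their own profile
LLAMA_31 = ModelProfile(
    tool_role="ipython",
    assistant_role="assistant",
    tool_call_open="<|python_tag|>",
    tool_call_close="<|eom_id|>",
)
```

The second option trades this flexibility for simpler, self-documenting per-model classes such as the `Llama31Agent` used below.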
Here is an example of how the current code might be used:

```python
from protollm.agents import Llama31ResponseParser, Llama31Agent

from my_library.config import (
    API_KEY,
    BASE_URL,
    MODEL,
    TEMPERATURE,
    MAX_TOKENS,
    CUSTOM_SYSTEM_MESSAGE,
    CUSTOM_USER_MESSAGE,
    FUNCTIONS_METADATA,
)


def main():
    # Parser that turns raw Llama 3.1 output into executable tool calls
    response_parser = Llama31ResponseParser()

    agent = Llama31Agent(
        api_key=API_KEY,
        base_url=BASE_URL,
        model=MODEL,
        tools_module="my_library.tools",
        response_parser=response_parser,
        custom_system_message=CUSTOM_SYSTEM_MESSAGE,
        custom_user_message=CUSTOM_USER_MESSAGE,
        functions_metadata=FUNCTIONS_METADATA,
        temperature=TEMPERATURE,
        max_tokens=MAX_TOKENS,
    )

    user_input = "some query"
    result = agent(user_input)
    print("Final Result:", result)


if __name__ == "__main__":
    main()
```