Auto-GPT is an experimental, open-source AI agent that uses OpenAI's GPT-4 or GPT-3.5 APIs to perform autonomous tasks. It is among the first examples of an application built on GPT-4.

Given a goal in natural language, Auto-GPT breaks it down into sub-tasks and uses the internet and other tools to achieve them. It automates the multi-step prompting process typically required to operate a chatbot such as ChatGPT.

Auto-GPT itself is free and open source. However, because it calls the GPT-4 API, there is a per-token cost for every request it makes.

Auto-GPT is different from Agent GPT in that Auto-GPT operates independently and generates its own prompts, while Agent GPT requires human interaction and depends on user inputs.

For Project Eden, AutoGPT instances act as extensions of the localized system and can be assigned difficult tasks that require complex approaches. AutoGPT systems can use plugins to accomplish tasks, can autonomously create subordinate AutoGPT instances, and can dynamically distribute tasks throughout clusters of AutoGPT systems. Eden uses an 'AutoGPT Stacking' strategy to optimize this capability, allowing AutoGPT systems to edit the base code of subordinate AutoGPT systems. This process also lets AutoGPT systems communicate across a cluster by updating shared files with information that master AutoGPT systems can then read and write to.
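The shared-file communication described above can be sketched as follows. This is a minimal illustration, not part of Auto-GPT itself: the file names, paths, and message format are all hypothetical placeholders for whatever convention a cluster actually adopts.

```python
import json
from pathlib import Path

# Hypothetical shared files; a real cluster would place these in its workspace.
TASKS_FILE = Path("shared/tasks.json")
RESULTS_FILE = Path("shared/results.json")

def post_task(task_id: str, description: str) -> None:
    """Master agent appends a task to the shared task file."""
    TASKS_FILE.parent.mkdir(parents=True, exist_ok=True)
    tasks = json.loads(TASKS_FILE.read_text()) if TASKS_FILE.exists() else []
    tasks.append({"id": task_id, "description": description, "status": "pending"})
    TASKS_FILE.write_text(json.dumps(tasks))

def take_task():
    """Subordinate agent claims the first pending task, or returns None."""
    if not TASKS_FILE.exists():
        return None
    tasks = json.loads(TASKS_FILE.read_text())
    for task in tasks:
        if task["status"] == "pending":
            task["status"] = "claimed"
            TASKS_FILE.write_text(json.dumps(tasks))
            return task
    return None

def post_result(task_id: str, output: str) -> None:
    """Subordinate agent writes its output for the master to read back."""
    RESULTS_FILE.parent.mkdir(parents=True, exist_ok=True)
    results = json.loads(RESULTS_FILE.read_text()) if RESULTS_FILE.exists() else {}
    results[task_id] = output
    RESULTS_FILE.write_text(json.dumps(results))
```

A master instance would call `post_task`, poll `RESULTS_FILE` for the answer, and subordinate instances would loop on `take_task` and `post_result`.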

AutoGPT Comprehensive User Guide

AutoGPT is a powerful AI-based development environment that equips developers with the tools they need to create, test, and deploy software solutions. It leverages the capabilities of GPT (Generative Pretrained Transformer) models for a variety of tasks including text generation, summarization, translation, and more. AutoGPT is an open-source AI tool built upon OpenAI's ChatGPT but with more functionalities. It is designed to be an autonomous AI assistant available to everyone, capable of accomplishing tasks without needing new prompts for every single action. In addition to the functionalities of ChatGPT, AutoGPT can access the web, run Google searches, create text files, use other plugins, and run many tasks back to back. These features are designed to make it more autonomous and efficient in handling tasks.

Key Benefits

Plugins Overview

Each plugin in AutoGPT has unique features designed to enhance your development experience. Here are some notable ones:

Setting Up AutoGPT

Configuring AutoGPT

Testing AutoGPT

Deploying AutoGPT

Working with Plugins

Best Practices and Troubleshooting

Remember, practice makes perfect. The more you use AutoGPT, the more familiar you'll become with its functionalities and features. Happy coding!

Once AutoGPT is installed, it can be further extended by installing plugins. These plugins can be downloaded from the root of the AutoGPT directory. They can be installed on Linux or MacOS using the curl command, or on Windows using PowerShell. After downloading, you need to execute the dependency install script for the plugins. This can be done either directly via the CLI or through provided scripts for Linux/MacOS and Windows. After installing the plugins, you can set ALLOWLISTED_PLUGINS in your .env file to allow specific plugins to be used without interactions.
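For example, the `.env` entry might look like the following; the plugin names here are illustrative placeholders, so substitute the names of the plugins you actually installed:

```ini
# .env — plugin names are examples only; use your installed plugins' names
ALLOWLISTED_PLUGINS=AutoGPTTwitter,AutoGPTEmailPlugin
```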

Integrating AutoGPT into Leon

To integrate AutoGPT with Leon, install it inside Leon's working directory in a folder named Auto-GPT. This allows Leon to use Visual Studio Plus to work directly on files within these directories.

Here's how you can set up this integration:

Leon can now access and work on the AutoGPT files directly from Visual Studio Plus, leveraging its robust features for creating, testing, and deploying software solutions. This setup enhances Leon's capabilities, allowing it to utilize the powerful features of AutoGPT for various tasks.

How to Use Once Installed

Once Auto-GPT is installed, you can interact with it using a command-line interface. Here's how you use it:

- You can see a list of all the possible command line arguments by running `--help` with the run script:


  ./run.sh --help     # on Linux / macOS

  .\run.bat --help    # on Windows


  For use with Docker, replace the script in the examples with `docker-compose run --rm auto-gpt`.

- Auto-GPT can be run with different AI Settings or Prompt Settings files using the following commands respectively:


  ./run.sh --ai-settings <filename>

  ./run.sh --prompt-settings <filename>


  Replace `<filename>` with the name of the file you want to use.

- Auto-GPT also supports a Speak Mode, where it uses Text-to-Speech (TTS) for output. This mode can be activated using:


  ./run.sh --speak


  This command will enable Text-to-Speech for Auto-GPT.

- There's a Continuous Mode, which allows Auto-GPT to run without user authorization. This mode is potentially dangerous and may cause your AI to run indefinitely or carry out actions you would not normally authorize:


  ./run.sh --continuous


  Use this mode at your own risk.

- Auto-GPT also has a Self-Feedback Mode, which allows the AI to provide self-feedback by verifying its own actions and checking if they align with its current goals. This mode can be enabled by inputting `S` into the input field. However, this mode will increase token use and thus cost more.

- If you don't have access to GPT-4, you can run Auto-GPT in GPT-3.5 ONLY Mode with the command:


  ./run.sh --gpt3only


  You can also achieve the same by setting `SMART_LLM_MODEL` in your `.env` file to `gpt-3.5-turbo`.

- If you have access to GPT-4, you can run Auto-GPT in GPT-4 ONLY Mode with the command:


  ./run.sh --gpt4only


  Please note that GPT-4 is more expensive to use, so running Auto-GPT in GPT-4-only mode will increase your API costs.

- You can view activity and error logs in the `./output/logs` directory. To print out debug logs, use the `--debug` argument:


  ./run.sh --debug


  This command will output debug logs.

- If you want to selectively disable some command groups, you can use the `DISABLED_COMMAND_CATEGORIES` config in your `.env` file. For example, to disable coding-related features, set it to a value along the lines of the following (the exact category names vary between Auto-GPT versions, so check the documentation for your release):


  DISABLED_COMMAND_CATEGORIES=autogpt.commands.analyze_code,autogpt.commands.execute_code,autogpt.commands.git_operations,autogpt.commands.improve_code,autogpt.commands.write_tests


  This will disable coding-related features in Auto-GPT.

Remember, Auto-GPT is a powerful tool, and with great power comes great responsibility. Always use it wisely and ensure you understand the consequences of the commands you are running.

Auto-GPT Technical User Guide: Post-Installation Configuration & Best Practices

1. Configuration Post-Installation

2. Running Modes

3. Using Auto-GPT

4. Recommendations & Best Practices

By understanding and following this guide, users can harness the full potential of Auto-GPT safely and efficiently.

Let's set up the configuration for the Primary and Secondary Auto-GPTs.

Primary Auto-GPT Configuration:

For the Primary Auto-GPT, we want it to act as a controller for the Secondary Auto-GPT. Its primary responsibility is to interact with the Secondary Auto-GPT by providing it with tasks and reading its output.

Secondary Auto-GPT Configuration:

The Secondary Auto-GPT will do the actual work, based on the instructions provided by the Primary Auto-GPT.

Instructions for Primary Auto-GPT:

When prompted with "I want Auto-GPT to:", the Primary Auto-GPT should be instructed:


  I want Auto-GPT to act as an interface and controller for the Secondary Auto-GPT. It should read and interpret tasks, relay them as specific instructions to the Secondary Auto-GPT, and read the output from the Secondary Auto-GPT to provide responses.

This instruction aligns with the roles and responsibilities you've outlined for the Primary Auto-GPT.

Instructions for Secondary Auto-GPT:

When prompted with "I want Auto-GPT to:", the Secondary Auto-GPT should be instructed:


  I want Auto-GPT to interpret and execute tasks based on the instructions provided by the Primary Auto-GPT. It should be capable of fetching information, interacting with online and local apps, and completing a wide range of tasks.

This sets the groundwork for the Secondary Auto-GPT to perform the heavy lifting in terms of task execution.

By setting up the two Auto-GPTs in this manner, you create a clear distinction between their roles, ensuring efficient collaboration between them.

Primary AutoGPT Agent:

AI Name: Primary Controller

AI Purpose: To act as an expert on AutoGPT [ ] and as a control system for the Secondary AutoGPT agent. Handle and interpret input from Leon.AI, the user, or output from the Secondary AutoGPT agent, and direct the Secondary agent using programmatic control through its terminal session. If the agent cannot find the console for the Secondary AutoGPT agent, it may work with the user to try to find it.

5 Goals:

Secondary AutoGPT Agent:

AI Name: Task Executor

AI Purpose: To execute and perform tasks as directed by the Primary AutoGPT agent. Operate under programmatic control from the Primary Controller, executing tasks without independent decision-making.

5 Goals:

Explicitly detailing the programmatic control strategy in the prompts gives the Primary AutoGPT agent a clearer roadmap for its interactions with the Secondary agent.

Dear Primary Agent, to interact with the secondary agent's terminal window (ID: 82074), consider using xdotool to programmatically send commands. Execute xdotool type --window 82074 "command" to type a command and xdotool key --window 82074 Return to simulate pressing 'Enter'. To capture the secondary agent's output, redirect its terminal output to a file and then read that file. Implement a loop where you send a command, wait for and read the response, then decide on the next actions. This approach is tailored for a Linux environment with X11. Always ensure safety by verifying commands to avoid unintended side effects or security risks, and be prepared to handle any errors that might arise during interactions for smooth communication with the secondary agent.
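The interaction loop described above might be sketched as follows. This is only an illustration of the approach on a Linux/X11 system with `xdotool` installed: the window ID comes from the note, while the log path and helper names are hypothetical, assuming the secondary agent's terminal output has been redirected to a file as suggested.

```python
import subprocess
from pathlib import Path

SECONDARY_WINDOW_ID = "82074"  # terminal window of the secondary agent (from the note)
OUTPUT_LOG = Path("secondary_output.log")  # hypothetical redirect target for its output

def type_command_argv(window_id: str, command: str) -> list:
    """Build the xdotool invocation that types a command into the window."""
    return ["xdotool", "type", "--window", window_id, command]

def press_enter_argv(window_id: str) -> list:
    """Build the xdotool invocation that simulates pressing Enter in the window."""
    return ["xdotool", "key", "--window", window_id, "Return"]

def send_command(command: str) -> None:
    """Type a command into the secondary agent's terminal, then press Enter."""
    subprocess.run(type_command_argv(SECONDARY_WINDOW_ID, command), check=True)
    subprocess.run(press_enter_argv(SECONDARY_WINDOW_ID), check=True)

def read_output() -> str:
    """Read whatever the secondary agent has written to its redirected log."""
    return OUTPUT_LOG.read_text() if OUTPUT_LOG.exists() else ""
```

A control loop would then alternate `send_command(...)` with polling `read_output()` until the expected response appears, validating each command before sending it, as the note advises.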