diff --git a/README.md b/README.md
index 43f01b0..e852c50 100644
--- a/README.md
+++ b/README.md
@@ -1,11 +1 @@
-# Open LLaMA
-
-Run [OpenLLaMA](https://github.com/openlm-research/open_llama) in a GPU environment with a single command. 📡
-
-## Speed Run
-
-1. Signup for [Beam](http://beam.cloud)
-2. Download the CLI and Python SDK
-3. Clone this template locally: `beam create-app openllama`
-4. Spin up a GPU environment to run inference: `beam start app.py`
-5. Deploy the app as a web API: `beam deploy app.py`
\ No newline at end of file
+# Flan-T5 HTTP API