Merged
17 changes: 9 additions & 8 deletions docs/examples_notebooks/api_overview.ipynb
@@ -16,7 +16,7 @@
"source": [
"## API Overview\n",
"\n",
"This notebook provides a demonstration of how to interact with graphrag as a library using the API as opposed to the CLI. Note that graphrag's CLI actually connects to the library through this API for all operations. "
"This notebook provides a demonstration of how to interact with graphrag as a library using the API as opposed to the CLI. Note that graphrag's CLI actually connects to the library through this API for all operations.\n"
]
},
{
@@ -48,16 +48,17 @@
"metadata": {},
"source": [
"## Prerequisite\n",
"\n",
"As a prerequisite to all API operations, a `GraphRagConfig` object is required. It is the primary means to control the behavior of graphrag and can be instantiated from a `settings.yaml` configuration file.\n",
"\n",
"Please refer to the [CLI docs](https://microsoft.github.io/graphrag/cli/#init) for more detailed information on how to generate the `settings.yaml` file."
"Please refer to the [CLI docs](https://microsoft.github.io/graphrag/cli/#init) for more detailed information on how to generate the `settings.yaml` file.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Generate a `GraphRagConfig` object"
"### Generate a `GraphRagConfig` object\n"
]
},
{
@@ -77,14 +78,14 @@
"source": [
"## Indexing API\n",
"\n",
"*Indexing* is the process of ingesting raw text data and constructing a knowledge graph. GraphRAG currently supports plaintext (`.txt`) and `.csv` file formats."
"_Indexing_ is the process of ingesting raw text data and constructing a knowledge graph. GraphRAG currently supports plaintext (`.txt`) and `.csv` file formats.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Build an index"
"## Build an index\n"
]
},
{
@@ -107,7 +108,7 @@
"source": [
"## Query an index\n",
"\n",
"To query an index, several index files must first be read into memory and passed to the query API. "
"To query an index, several index files must first be read into memory and passed to the query API.\n"
]
},
{
@@ -138,7 +139,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The response object is the official response from graphrag, while the context object holds various metadata regarding the querying process used to obtain the final response."
"The response object is the official response from graphrag, while the context object holds various metadata regarding the querying process used to obtain the final response.\n"
]
},
{
@@ -154,7 +155,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Digging into the context a bit more provides users with extremely granular information, such as which sources of data (down to the level of text chunks) were ultimately retrieved and used as part of the context sent to the LLM."
"Digging into the context a bit more provides users with extremely granular information, such as which sources of data (down to the level of text chunks) were ultimately retrieved and used as part of the context sent to the LLM.\n"
]
},
{
14 changes: 7 additions & 7 deletions docs/examples_notebooks/input_documents.ipynb
@@ -18,7 +18,7 @@
"\n",
"Newer versions of GraphRAG let you submit a dataframe directly instead of running through the input processing step. This notebook demonstrates both regular and update runs.\n",
"\n",
"If performing an update, the assumption is that your dataframe contains only the new documents to add to the index."
"If performing an update, the assumption is that your dataframe contains only the new documents to add to the index.\n"
]
},
{
@@ -54,7 +54,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Generate a `GraphRagConfig` object"
"### Generate a `GraphRagConfig` object\n"
]
},
{
@@ -72,14 +72,14 @@
"source": [
"## Indexing API\n",
"\n",
"*Indexing* is the process of ingesting raw text data and constructing a knowledge graph. GraphRAG currently supports plaintext (`.txt`) and `.csv` file formats."
"_Indexing_ is the process of ingesting raw text data and constructing a knowledge graph. GraphRAG currently supports plaintext (`.txt`) and `.csv` file formats.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Build an index"
"## Build an index\n"
]
},
{
@@ -109,7 +109,7 @@
"source": [
"## Query an index\n",
"\n",
"To query an index, several index files must first be read into memory and passed to the query API. "
"To query an index, several index files must first be read into memory and passed to the query API.\n"
]
},
{
@@ -140,7 +140,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The response object is the official response from graphrag, while the context object holds various metadata regarding the querying process used to obtain the final response."
"The response object is the official response from graphrag, while the context object holds various metadata regarding the querying process used to obtain the final response.\n"
]
},
{
@@ -156,7 +156,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Digging into the context a bit more provides users with extremely granular information, such as which sources of data (down to the level of text chunks) were ultimately retrieved and used as part of the context sent to the LLM."
"Digging into the context a bit more provides users with extremely granular information, such as which sources of data (down to the level of text chunks) were ultimately retrieved and used as part of the context sent to the LLM.\n"
]
},
{
72 changes: 44 additions & 28 deletions docs/get_started.md
@@ -10,40 +10,59 @@ The following is a simple end-to-end example for using GraphRAG on the command line.

It shows how to use the system to index some text, and then use the indexed data to answer questions about the documents.

# Install GraphRAG
## Install GraphRAG

To get started, create a project space and python virtual environment to install `graphrag`.

### Create Project Space

```bash
pip install graphrag
mkdir graphrag_quickstart
cd graphrag_quickstart
python -m venv .venv
```
### Activate Python Virtual Environment - Unix/MacOS

# Running the Indexer
We need to set up a data project and some initial configuration. First let's get a sample dataset ready:
```bash
source .venv/bin/activate
```

```sh
mkdir -p ./christmas/input
### Activate Python Virtual Environment - Windows

```bash
.venv\Scripts\activate
```

Get a copy of A Christmas Carol by Charles Dickens from a trusted source:
### Install GraphRAG

```sh
curl https://www.gutenberg.org/cache/epub/24022/pg24022.txt -o ./christmas/input/book.txt
```bash
python -m pip install graphrag
```

## Set Up Your Workspace Variables
### Initialize GraphRAG

To initialize your workspace, first run the `graphrag init` command.
Since we have already configured a directory named `./christmas` in the previous step, run the following command:

```sh
graphrag init --root ./christmas
graphrag init
```

This will create two files: `.env` and `settings.yaml` in the `./christmas` directory.
This will create two files, `.env` and `settings.yaml`, and an `input` directory in the current directory.

- `input` contains the text files to process with `graphrag`.
- `.env` contains the environment variables required to run the GraphRAG pipeline. If you inspect the file, you'll see a single environment variable defined,
`GRAPHRAG_API_KEY=<API_KEY>`. Replace `<API_KEY>` with your own OpenAI or Azure API key.
- `settings.yaml` contains the settings for the pipeline. You can modify this file to change the settings for the pipeline.
<br/>
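
For reference, a typical model entry in `settings.yaml` combines the fields discussed in this guide. This is a minimal sketch: the `models`/`default_chat_model` nesting and the field values are assumptions for illustration, and the file generated by `graphrag init` is authoritative:

```yaml
models:
  default_chat_model:            # assumed layout; check your generated file
    type: chat
    model_provider: openai       # or azure, per the Azure section of this guide
    model: gpt-4.1
    api_key: ${GRAPHRAG_API_KEY} # interpolated from the .env file
```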

### Download Sample Text

Get a copy of A Christmas Carol by Charles Dickens from a trusted source:

```sh
curl https://www.gutenberg.org/cache/epub/24022/pg24022.txt -o ./input/book.txt
```

## Set Up Workspace Variables

### Using OpenAI

@@ -56,13 +75,14 @@ In addition to setting your API key, Azure OpenAI users should set the variables
```yaml
type: chat
model_provider: azure
model: gpt-4.1
deployment_name: <AZURE_DEPLOYMENT_NAME>
api_base: https://<instance>.openai.azure.com
api_version: 2024-02-15-preview # You can customize this for other versions
```

Most people name their deployments the same as their model; if yours differ, be sure to set `deployment_name`.

#### Using Managed Auth on Azure

To use managed auth, edit the `auth_type` in your model config and *remove* the `api_key` line:

```yaml
@@ -71,38 +91,34 @@ auth_type: azure_managed_identity # Default auth_type is api_key

You will also need to login with [az login](https://learn.microsoft.com/en-us/cli/azure/authenticate-azure-cli) and select the subscription with your endpoint.

## Running the Indexing pipeline
## Index

Now we're ready to run the pipeline!
Now we're ready to index!

```sh
graphrag index --root ./christmas
graphrag index
```

![pipeline executing from the CLI](img/pipeline-running.png)

This process will usually take a few minutes to run. Once the pipeline is complete, you should see a new folder called `./christmas/output` with a series of parquet files.
This process will usually take a few minutes to run. Once the pipeline is complete, you should see a new folder called `./output` with a series of parquet files.

# Using the Query Engine
# Query

Now let's ask some questions using this dataset.

Here is an example using Global search to ask a high-level question:

```sh
graphrag query \
--root ./christmas \
--method global \
--query "What are the top themes in this story?"
graphrag query "What are the top themes in this story?"
```

Here is an example using Local search to ask a more specific question about a particular character:

```sh
graphrag query \
--root ./christmas \
--method local \
--query "Who is Scrooge and what are his main relationships?"
"Who is Scrooge and what are his main relationships?" \
--method local
```

Please refer to [Query Engine](query/overview.md) docs for detailed information about how to leverage our Local and Global search mechanisms for extracting meaningful insights from data after the Indexer has wrapped up execution.
2 changes: 1 addition & 1 deletion docs/index/byog.md
@@ -65,4 +65,4 @@ Putting it all together:

- `output`: Create an output folder and put your entities and relationships (and optionally text_units) parquet files in it.
- Update your config as noted above to only run the workflows subset you need.
- Run `graphrag index --root <your project root>`
- Run `graphrag index --root <your_project_root>`
10 changes: 4 additions & 6 deletions docs/prompt_tuning/auto_prompt_tuning.md
@@ -20,16 +20,14 @@ Before running auto tuning, ensure you have already initialized your workspace w
You can run the main script from the command line with various options:

```bash
graphrag prompt-tune [--root ROOT] [--config CONFIG] [--domain DOMAIN] [--selection-method METHOD] [--limit LIMIT] [--language LANGUAGE] \
graphrag prompt-tune [--root ROOT] [--domain DOMAIN] [--selection-method METHOD] [--limit LIMIT] [--language LANGUAGE] \
[--max-tokens MAX_TOKENS] [--chunk-size CHUNK_SIZE] [--n-subset-max N_SUBSET_MAX] [--k K] \
[--min-examples-required MIN_EXAMPLES_REQUIRED] [--discover-entity-types] [--output OUTPUT]
```

## Command-Line Options

- `--config` (required): The path to the configuration file. This is required to load the data and model settings.

- `--root` (optional): The data project root directory, including the config files (YML, JSON, or .env). Defaults to the current directory.
- `--root` (optional): Path to the project directory that contains the config file (settings.yaml). Defaults to the current directory.

- `--domain` (optional): The domain related to your input data, such as 'space science', 'microbiology', or 'environmental news'. If left empty, the domain will be inferred from the input data.

@@ -56,15 +54,15 @@ graphrag prompt-tune [--root ROOT] [--config CONFIG] [--domain DOMAIN] [--selec
## Example Usage

```bash
python -m graphrag prompt-tune --root /path/to/project --config /path/to/settings.yaml --domain "environmental news" \
python -m graphrag prompt-tune --root /path/to/project --domain "environmental news" \
--selection-method random --limit 10 --language English --max-tokens 2048 --chunk-size 256 --min-examples-required 3 \
--no-discover-entity-types --output /path/to/output
```

or, with minimal configuration (suggested):

```bash
python -m graphrag prompt-tune --root /path/to/project --config /path/to/settings.yaml --no-discover-entity-types
python -m graphrag prompt-tune --root /path/to/project --no-discover-entity-types
```
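
The `--selection-method` and `--limit` options above control which text chunks seed prompt generation. As a rough illustration of what such methods mean (a hypothetical helper, not graphrag's implementation; method names other than `random` are assumptions here):

```python
import random


def select_chunks(
    chunks: list[str], method: str = "random", limit: int = 10, seed: int = 42
) -> list[str]:
    """Pick the subset of text chunks used to generate tuning prompts."""
    if method == "all":
        return chunks  # use every chunk
    if method == "top":
        return chunks[:limit]  # first N chunks in document order
    if method == "random":
        rng = random.Random(seed)  # seeded for reproducibility
        return rng.sample(chunks, min(limit, len(chunks)))
    raise ValueError(f"unknown selection method: {method}")


chunks = [f"chunk-{i}" for i in range(100)]
picked = select_chunks(chunks, method="random", limit=10)
print(len(picked))  # 10
```

Whatever the method, the selected chunks are then fed to the LLM to draft the domain-specific prompts.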

## Document Selection Methods
68 changes: 68 additions & 0 deletions packages/graphrag-common/README.md
@@ -48,4 +48,72 @@ single2 = factory.create("some_other_strategy", {"value": "ignored"})
assert single1 is single2
assert single1.get_value() == "singleton"
assert single2.get_value() == "singleton"
```

## Config module

```python
from pydantic import BaseModel, Field
from graphrag_common.config import load_config

from pathlib import Path

class Logging(BaseModel):
"""Test nested model."""

directory: str = Field(default="output/logs")
filename: str = Field(default="logs.txt")

class Config(BaseModel):
"""Test configuration model."""

name: str = Field(description="Name field.")
logging: Logging = Field(description="Nested model field.")

# Basic - by default:
# - searches for Path.cwd() / settings.[yaml|yml|json]
# - sets the CWD to the directory containing the config file.
#   so if no custom config path is provided, the CWD remains unchanged.
# - loads config_directory/.env file
# - parses ${env} in the config file
config = load_config(Config)

# Custom file location
config = load_config(Config, "path_to_config_filename_or_directory_containing_settings.[yaml|yml|json]")

# Using a custom file extension with
# custom config parser (str) -> dict[str, Any]
config = load_config(
config_initializer=Config,
config_path="config.toml",
config_parser=lambda contents: toml.loads(contents) # Needs toml pypi package
)

# With overrides - provided values override what's in the config file.
# Only overrides what is specified - recursively merges settings.
config = load_config(
config_initializer=Config,
overrides={
"name": "some name",
"logging": {
"filename": "my_logs.txt"
}
},
)

# By default, sets CWD to directory containing config file
# So custom config paths will change the CWD.
config = load_config(
config_initializer=Config,
config_path="some/path/to/config.yaml",
set_cwd=True # default
)

# now cwd == some/path/to
assert Path.cwd().match("some/path/to")

# And now, throughout the codebase, relative paths in the config
# resolve relative to the config directory:
assert Path(config.logging.directory).resolve().match("some/path/to/output/logs")

```
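
The override behavior described above (provided values are recursively merged over the file contents, touching only the keys you specify) can be sketched in plain Python. `deep_merge` is a hypothetical helper for illustration, not part of `graphrag_common`:

```python
from typing import Any


def deep_merge(base: dict[str, Any], overrides: dict[str, Any]) -> dict[str, Any]:
    """Return a copy of `base` with `overrides` applied recursively.

    Only keys present in `overrides` are touched; sibling keys inside
    nested dicts are preserved rather than replaced wholesale.
    """
    merged = dict(base)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged


file_config = {
    "name": "from file",
    "logging": {"directory": "output/logs", "filename": "logs.txt"},
}
overrides = {"name": "some name", "logging": {"filename": "my_logs.txt"}}

merged = deep_merge(file_config, overrides)
print(merged["logging"])  # {'directory': 'output/logs', 'filename': 'my_logs.txt'}
```

Note that `logging.directory` survives the merge even though the override only mentioned `logging.filename`, which is the behavior the `load_config(overrides=...)` example relies on.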