This commit introduces support for Ollama as an alternative Large Language Model (LLM) provider and enhances PDF image extraction capabilities.
- **Ollama Integration:**
- Implemented `set_ollama_config` to configure Ollama's base URL from `config.ini`.
- Modified `llm.py` to dynamically select and configure the LLM (Gemini or Ollama) based on the `PROVIDER` setting.
- Updated `get_model_name` to return provider-specific default model names.
- `pdf_convertor.py` now conditionally initializes `ChatGoogleGenerativeAI` or `ChatOllama` based on the configured provider.
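  The provider-selection logic described above can be sketched roughly as follows. This is a hypothetical reconstruction: the section name `[llm]`, the default model names, and the exact function signatures are assumptions, not the commit's actual code.

  ```python
  import configparser


  def get_provider(config_path: str = "config.ini") -> str:
      """Read the PROVIDER setting from config.ini (defaults to 'gemini')."""
      config = configparser.ConfigParser()
      config.read(config_path)
      return config.get("llm", "PROVIDER", fallback="gemini").lower()


  def get_model_name(provider: str) -> str:
      """Return a provider-specific default model name (values are placeholders)."""
      defaults = {"gemini": "gemini-1.5-flash", "ollama": "llama3"}
      return defaults.get(provider, defaults["gemini"])


  def build_llm(config_path: str = "config.ini"):
      """Conditionally initialize ChatOllama or ChatGoogleGenerativeAI.

      Imports are deferred so only the selected provider's package is needed.
      """
      provider = get_provider(config_path)
      model = get_model_name(provider)
      if provider == "ollama":
          from langchain_ollama import ChatOllama
          return ChatOllama(model=model)
      from langchain_google_genai import ChatGoogleGenerativeAI
      return ChatGoogleGenerativeAI(model=model)
  ```

  Reading the provider once and branching at construction time keeps the rest of the pipeline provider-agnostic, since both classes expose the same LangChain chat-model interface.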
- **PyMuPDF Image Extraction:**
- Added a new `extract_images_from_pdf` function using PyMuPDF (`fitz`) for direct image extraction from PDF files.
- Introduced `get_extract_images_from_pdf_flag` to control this feature via `config.ini`.
- Updated `convert_pdf_to_markdown` and `refine_content` to use this new image extraction method when the flag is enabled.
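  A minimal sketch of what direct extraction with PyMuPDF typically looks like; the function name matches the commit, but the parameters, output naming scheme, and return value are assumptions for illustration.

  ```python
  import os

  import fitz  # PyMuPDF


  def extract_images_from_pdf(pdf_path: str, output_dir: str) -> list[str]:
      """Extract every embedded image from the PDF into output_dir."""
      os.makedirs(output_dir, exist_ok=True)
      saved = []
      doc = fitz.open(pdf_path)
      for page_index, page in enumerate(doc):
          for img_index, img in enumerate(page.get_images(full=True)):
              xref = img[0]  # cross-reference number of the image object
              info = doc.extract_image(xref)  # raw bytes plus file extension
              name = f"page{page_index + 1}_img{img_index + 1}.{info['ext']}"
              path = os.path.join(output_dir, name)
              with open(path, "wb") as fh:
                  fh.write(info["image"])
              saved.append(path)
      doc.close()
      return saved
  ```

  Because `extract_image` returns the embedded bytes as stored in the PDF, this avoids re-rendering pages and preserves each image's original format and resolution.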
- **Refinement Flow:**
- Adjusted the order of the `save_md_images` call in `main.py` and added an option to save the refined markdown under a specific filename (`index_refined.md`).
- **Dependencies:**
- Updated `pyproject.lock` to add the new dependencies for Ollama integration (`langchain-ollama`) and image extraction (`PyMuPDF`), along with platform-specific markers for the NVIDIA dependencies.
23 lines · 545 B · JSON
```json
{
  "configurations": [
    {
      "name": "refine",
      "type": "debugpy",
      "request": "launch",
      "program": "refine.py",
      "console": "integratedTerminal",
      "args": [
        "--md-path",
        "output/13/index.md"
      ]
    },
    {
      "name": "main",
      "type": "debugpy",
      "request": "launch",
      "program": "main.py",
      "console": "integratedTerminal",
      "args": []
    }
  ]
}
```