@honggyukim
Contributor

Purpose of the change

This PR fixes memmachine-compose.sh for Ollama in a Linux environment.

Description

Since host.docker.internal is only available in Docker Desktop, the hostname needs to be explicitly mapped in docker-compose.yml for Linux users.
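
The change itself is small; a minimal sketch of the docker-compose.yml addition (the `memmachine` service name is taken from the repository's compose file):

```yaml
services:
  memmachine:
    # Map host.docker.internal to the host's gateway IP so Linux containers
    # can reach an Ollama server running on the host. Docker Desktop already
    # provides this name, so the entry is effectively a no-op there.
    extra_hosts:
      - "host.docker.internal:host-gateway"
```

After `docker compose up -d`, the mapping can be verified from inside the container with `docker compose exec memmachine getent hosts host.docker.internal`.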

Fixes/Closes

Fixes #918

Type of change

  • Bug fix (non-breaking change which fixes an issue)

How Has This Been Tested?

I have tested it by running memmachine-compose.sh with an Ollama environment on Linux.

  • Manual verification (list step-by-step instructions)

Test Results: [Attach logs, screenshots, or relevant output]

MemMachine is ready as follows.

$ ./memmachine-compose.sh
MemMachine Docker Startup Script
====================================

[SUCCESS] Docker and Docker Compose are available
[WARNING] .env file not found. Creating from template...
[SUCCESS] Created .env file from sample_configs/env.dockercompose
[WARNING] configuration.yml file not found. Creating from template...
[PROMPT] Which configuration would you like to use for the Docker Image? (CPU/GPU) [CPU]:
[INFO] CPU configuration selected.
[PROMPT] Which provider would you like to use? (OpenAI/Bedrock/Ollama/OpenAI-compatible) [OpenAI]: Ollama
[INFO] Selected provider: OLLAMA
[SUCCESS] Set MEMMACHINE_IMAGE to memmachine/memmachine:latest-cpu in .env file
[PROMPT] Which Ollama LLM model would you like to use? [llama3]:
[SUCCESS] Selected Ollama LLM model: llama3
[PROMPT] Which Ollama embedding model would you like to use? [nomic-embed-text]:
[SUCCESS] Selected Ollama embedding model: nomic-embed-text
[INFO] Generating configuration file for OLLAMA provider...
[SUCCESS] Generated configuration file with OLLAMA provider settings
[PROMPT] Ollama base URL [http://host.docker.internal:11434/v1]:
[SUCCESS] Set Ollama base URL: http://host.docker.internal:11434/v1
[SUCCESS] Ollama configuration detected with default base URL
[SUCCESS] API key in configuration.yml appears to be configured
[SUCCESS] Database credentials in configuration.yml appear to be configured
[INFO] Pulling and starting MemMachine services...
[INFO] Pulling latest images... (Target: memmachine/memmachine:latest-cpu)
    ...
[INFO] Waiting for MemMachine to be ready...
[SUCCESS] MemMachine is ready
[SUCCESS] 🎉 MemMachine is now running!

Service URLs:
  📊 MemMachine API Docs: http://localhost:8080/docs
  🗄️   Neo4j Browser: http://localhost:7474
  📈 Health Check: http://localhost:8080/api/v2/health
  📊 Metrics: http://localhost:8080/api/v2/metrics

Database Access:
  🐘 PostgreSQL: localhost:5432 (user: memmachine, db: memmachine)
  🔗 Neo4j Bolt: localhost:7687 (user: neo4j)

Useful Commands:
  📋 View logs: docker-compose logs -f
  🛑 Stop services: docker-compose down
  🔄 Restart: docker-compose restart
  🧹 Clean up: docker-compose down -v

Checklist

  • I have signed the commit(s) within this pull request
  • My code follows the style guidelines of this project (See STYLE_GUIDE.md)
  • I have performed a self-review of my own code

Maintainer Checklist

  • Confirmed all checks passed
  • Contributor has signed the commit(s)
  • Reviewed the code
  • Run, Tested, and Verified the change(s) work as expected

Since "host.docker.internal" is only available in docker desktop, it
needs to be explicitly added to docker-compose.yml for Linux users.

Fixes: MemMachine#918
Suggested-by: Steve Scargall <steve.scargall@memverge.com>
Signed-off-by: Honggyu Kim <honggyu.kim@sk.com>
Copilot AI left a comment


Pull request overview

This PR enables Linux users to run MemMachine with Ollama by making host.docker.internal available in the Docker Compose environment. The change adds the host.docker.internal hostname mapping to the memmachine service configuration.

Changes:

  • Added extra_hosts configuration to map host.docker.internal to the host gateway in docker-compose.yml


@sscargal
Contributor

Some research and background on the change to help the reviewers and maintainers.

What does the host.docker.internal host entry do?

  • host.docker.internal is a special DNS name that Docker reserves to let containers easily reach services running on the host machine (the physical machine or VM running Docker).
  • When a container tries to connect to host.docker.internal, that request is routed back to the host’s network interface.

Why is it used?

  • In some setups, especially for development, you may want your containerized application to access resources on your host machine—such as a local database, API, or a development server.
  • It’s easier than hard-coding the host’s IP address, which can change between environments and network setups.
  • It provides a stable, predictable way for containers to reach host services regardless of the underlying IP addressing.
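
As a quick sanity check (a hypothetical snippet, not part of this PR), a container's entrypoint or a debugging session can test whether the name resolves before the application tries to use it:

```shell
# Check whether host.docker.internal resolves in the current environment.
# Inside a native-Linux container without the extra_hosts mapping this is
# expected to fail; with the mapping (or under Docker Desktop) it succeeds.
if getent hosts host.docker.internal > /dev/null; then
  echo "host.docker.internal resolves"
else
  echo "host.docker.internal does NOT resolve (add extra_hosts on Linux)"
fi
```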

When is this needed or not needed?

  • Needed:
    • When your containerized app needs to connect to a service running on your host (e.g., local database, API, Redis instance on your laptop).
    • For development environments, integration testing, and debugging.
  • Not needed:
    • When all required services are within containers (using Docker networks for inter-container communication).
    • In production, where you usually don’t run dependent services directly on the Docker host; everything lives in containers or behind managed internal endpoints.
    • When using cloud-based infrastructure, where host networking is typically not exposed to containers.
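
For contrast, when every dependency is itself containerized, Compose's internal DNS already lets services reach each other by name and no extra_hosts entry is needed. A hypothetical all-containers fragment (the `OLLAMA_BASE_URL` variable name is illustrative, not from this repository):

```yaml
services:
  memmachine:
    # Reaches the ollama container by its service name over the Compose
    # network; host.docker.internal is not involved in this setup.
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434/v1   # hypothetical variable name
  ollama:
    image: ollama/ollama
```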

Platform Differences: Linux, Windows, macOS

Windows & macOS

  • host.docker.internal works out-of-the-box with Docker Desktop (since version 18.03+).
  • No extra configuration is required in your Docker Compose or Dockerfile.
  • Containers can reliably connect to the host machine using host.docker.internal.

Linux

  • Docker (without Docker Desktop) does not set up host.docker.internal by default.

  • This is because Docker Desktop provides the name through its VM’s DNS, while native Linux Docker has no equivalent built-in mapping.

  • If you want Linux containers to resolve host.docker.internal, you need to explicitly add an extra_hosts entry in your docker-compose.yml:

    extra_hosts:
      - "host.docker.internal:host-gateway"

    This maps the special host.docker.internal DNS name to the host gateway IP (the host-gateway value is supported in Docker Engine 20.10 and later).

Impact when using docker compose:

  • Windows/macOS: Nothing changes, as host.docker.internal is always available.
  • Linux: Adding the extra_hosts entry makes host.docker.internal work the same as on Windows/macOS—a big benefit for cross-platform compatibility.
    • If not present on Linux, containers will not be able to resolve that name and will fail to connect to host services.
    • With this entry, development environments and scripts that expect host.docker.internal will work uniformly on all major OSes.

Summary

  • Adding the host entry for host.docker.internal in your Docker Compose setup makes containers running on Linux behave the same as on Windows and macOS when they need to reach services on the host.
  • This is mainly for development and testing scenarios where such access is required.
  • It ensures cross-platform compatibility and prevents configuration errors across different operating systems.
  • In production or when all services are containerized, this entry may not be needed.



@sscargal sscargal left a comment


LGTM, thanks.

@sscargal sscargal changed the title from "Support host.docker.internal in Linux to docker-compose" to "Support host.docker.internal in Linux for docker-compose" on Jan 12, 2026
@honggyukim
Contributor Author

@sscargal Thanks for the summary. That explains this change well.

Development

Successfully merging this pull request may close these issues.

[Bug]: [ERROR] memmachine.common.embedder.openai_embedder
