Example of the issue:
To load configuration parameters, Airbyte must first `docker pull` the connector's image, which may be several hundred megabytes. Under poor connectivity, the pull request may take a very long time or time out. More context on this issue can be found here. If your Internet speed is less than 30 Mbps down, or you are running bandwidth-heavy workloads concurrently with Airbyte, you may encounter this issue. Run a speed test to verify your Internet speed.
One workaround is to manually pull the latest version of every connector you'll use, then reset Airbyte. Note that this will remove any connections, sources, or destinations you have configured in Airbyte. To do this:
Decide which connectors you'd like to use. For this example let's say you want the Postgres source and the Snowflake destination.
For each connector you'd like to use, run `docker pull <repository>:<tag>` from your shell, replacing `<repository>` and `<tag>` with the values copied from the step above, e.g. `docker pull airbyte/source-postgres:0.1.6`.
Once you've finished downloading all the images, from the Airbyte repository root run `docker-compose down -v` followed by `docker-compose up`.
The issue should be resolved.
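The pull-and-reset steps above can be sketched as a small shell script (the image tags here are examples; use the versions that match your connectors):

```shell
#!/usr/bin/env sh

# Pull each connector image ahead of time so Airbyte doesn't have to
# download it under poor connectivity.
pull_connectors() {
  for image in "$@"; do
    docker pull "$image"
  done
}

# Reset Airbyte from the repository root. WARNING: `down -v` removes the
# volumes, i.e. any configured connections, sources, and destinations.
reset_airbyte() {
  docker-compose down -v
  docker-compose up
}

# Example usage (hypothetical tags):
# pull_connectors airbyte/source-postgres:0.1.6 airbyte/destination-snowflake:0.1.9
# reset_airbyte
```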
Depending on your Docker network configuration, you may not be able to connect to localhost. If you are running into connection-refused errors when running Airbyte via Docker Compose on Mac, try using host.docker.internal as the host. On Linux, you may have to modify docker-compose.yml and add an extra_hosts entry that maps host.docker.internal to your local machine.
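On Linux, such a mapping can be added with Docker Compose's `extra_hosts` option; a sketch for the server service (the service name `airbyte-server` is an example and may differ in your `docker-compose.yml`; the `host-gateway` value requires Docker 20.10+):

```yaml
services:
  airbyte-server:
    extra_hosts:
      # Map host.docker.internal to the Docker host machine
      - "host.docker.internal:host-gateway"
```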
We've had this issue once (no spinner and a 500 HTTP error); we don't know why. Resolution: try stopping Airbyte (`docker-compose down`) and restarting it (`docker-compose up`).
You receive the error below when you try to sync a database with many tables (6,000 or more).
airbyte-scheduler | io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: grpc: received message larger than max (<NUMBER> vs. 4194304)
The workaround is to move the tables you actually want to sync into a separate namespace. If you need all of the tables, split them across multiple namespaces and use a separate connection for each.
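For a Postgres source, moving tables into a dedicated schema could be sketched like this (the schema and table names are hypothetical; adjust the `psql` connection flags for your database, and create the target schema first):

```shell
#!/usr/bin/env sh

# Move each listed table into the given schema so the Airbyte connection
# only discovers the tables you actually want to sync.
move_to_schema() {
  schema="$1"
  shift
  for table in "$@"; do
    psql -c "ALTER TABLE ${table} SET SCHEMA ${schema};"
  done
}

# Example usage (hypothetical names):
# psql -c "CREATE SCHEMA airbyte_sync;"
# move_to_schema airbyte_sync orders customers invoices
```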