Instead of using a single docker-compose.yml I tend to specify a list of files to use in .env. Unfortunately this list has to be given on one line, which could be tidier. (Sometimes I list the files in a manifest and use a script to create the .env file.)
If the .env file supported some kind of include directive, then you could break things down a bit, and users could add their own files.
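For what it's worth, docker-compose already reads a multi-file list from a COMPOSE_FILE variable in .env, with entries separated by COMPOSE_PATH_SEPARATOR; the one-line restriction is just a property of the .env format itself. A minimal sketch (the plugin path is illustrative):

```shell
# .env -- docker-compose picks this file up automatically from the
# project directory. COMPOSE_FILE holds the whole list on one line,
# split on COMPOSE_PATH_SEPARATOR (":" by default on Linux/macOS).
COMPOSE_PATH_SEPARATOR=:
COMPOSE_FILE=docker-compose.yml:plugins/cassandra/compose.yml
```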
The separation into different YAML files sounds like an interesting concept. I am still not sure how those files will be sourced, though. Does .env provide a mechanism to load multiple other files without having to specify them on the docker-compose command line?
I haven’t tried this yet, but wouldn’t the container start up anyway and then simply exit due to not having a startup command?
What I see is that the docker-compose concept defines a composable, potentially hierarchically constructed system, but is not actually fit for that purpose. This was my best attempt to use docker-compose in a composable manner out of the box. I thought it might fit your use case, but whichever way you look at it, it's a kludge forced upon us.
My next idea would be to write a script that assembles the docker-compose.yml file from a set of components, and package that up as a wrapper around docker-compose. I was on this path when I found DB.
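A minimal sketch of that wrapper, assuming each component ships a compose.yml fragment under a plugins/ directory (the layout and names are illustrative, not an existing convention):

```shell
#!/bin/sh
# assemble.sh -- collect every plugin's compose fragment into the
# colon-separated COMPOSE_FILE list that docker-compose expects,
# then run docker-compose against it.
set -eu

compose_file_list() {
    files="docker-compose.yml"
    for f in plugins/*/compose.yml; do
        [ -e "$f" ] || continue        # glob matched nothing: no plugins
        files="$files:$f"
    done
    printf '%s\n' "$files"
}

# Usage (from the project root):
#   COMPOSE_FILE="$(compose_file_list)" docker-compose up -d
```

docker-compose then merges the listed files left to right, so later fragments can extend or override earlier ones.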
The Dockerfile concept is similarly broken: it should have one responsibility, defining how the container relates to the Docker runtime. Since this can be defined in docker-compose, the Dockerfile itself serves no purpose! Building the contents of the container is a job for make, or other package-management tools designed for that purpose. So I use one generic, essentially empty, Dockerfile for all my own containers, passing the job of building things on to scripts invoked during the container build.
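A sketch of what such a generic Dockerfile can look like; the base image and the build.sh script name are placeholders for whatever your build tooling provides:

```dockerfile
# One generic Dockerfile for every container: copy in a build script
# and let it do the real work with make or a package manager.
FROM debian:stable-slim
COPY build.sh /tmp/build.sh
RUN sh /tmp/build.sh && rm /tmp/build.sh
```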
I have been working on a scheme whereby bash scripts with sub-commands (like git) can be assembled by composition. Then when you run `command help`, it generates help for the whole composition tree. The same scheme could be adapted to build a docker-compose file for a hierarchy of parts.
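One way to sketch that scheme, borrowing git's convention of sub-commands as separate executables (all names and the libexec layout here are assumptions, not the actual project):

```shell
#!/bin/sh
# mycmd -- git-style dispatcher sketch. Sub-commands are executables
# named mycmd-<name> in $LIBEXEC; "mycmd help" asks each one for a
# one-line summary, so composing the tree of parts composes the help.
set -eu
LIBEXEC="${LIBEXEC:-./libexec}"

mycmd_help() {
    echo "usage: mycmd <command> [args]"
    for cmd in "$LIBEXEC"/mycmd-*; do
        [ -x "$cmd" ] || continue
        name=${cmd##*/mycmd-}
        # each sub-command answers --summary with one line of help
        printf '  %-12s %s\n' "$name" "$("$cmd" --summary)"
    done
}

mycmd() {
    sub=${1:-help}
    [ $# -gt 0 ] && shift
    case "$sub" in
        help) mycmd_help ;;
        *)    "$LIBEXEC/mycmd-$sub" "$@" ;;
    esac
}
```

Adding a part is then just dropping another `mycmd-<name>` executable into libexec; both dispatch and help pick it up automatically.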
Do you have anything up on GitHub that is using this structure?
Not exactly sure about what you mean.
I was just taking your idea and adding a little on top of that.
In .env you declare that plugins/cassandra/compose.yml should be auto-sourced, which defines the full structure for CassandraDB. Now you will also have a Cassandra DB added to the Devilbox stack. However, what is still missing is the integration with the currently bundled intranet. There are, for example, info pages for MySQL databases, showing the databases you created and their sizes, and an additional page that shows what MySQL configuration has been applied. This obviously does not exist for CassandraDB.
The idea I had in mind is to have a second YAML definition that defines PHP code for specific services, which will be run by the intranet to show information. In the case of Cassandra you would define the code that displays Cassandra databases and Cassandra configuration information (of course the current intranet must be adjusted in order to execute this file). Let's assume that file is called intranet.yml.
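Purely as a strawman, such an intranet.yml could have a shape like the following; none of these keys exist anywhere yet, this is just one possible way to express "this plugin contributes these PHP-rendered pages":

```yaml
# intranet.yml -- hypothetical plugin descriptor, not an existing format.
name: cassandra
menu: Cassandra DB
pages:
  databases:
    title: Cassandra Databases
    # PHP file the intranet would include to render the page
    php: pages/databases.php
  config:
    title: Cassandra Configuration
    php: pages/config.php
```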
I have started a project for this idea. Have a look at the first commit, which just includes the composable idea.
If we adopt this (or something similar) as a convention for a .env file pre-processor, then we may be able to fix several other things that the docker-compose maintainers apparently refuse to. You never know, it might catch on.
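The pre-processor itself can be tiny. A sketch, assuming a manifest file listing one compose file path per line (the manifest name is made up), folded into the single-line COMPOSE_FILE entry that docker-compose insists on:

```shell
#!/bin/sh
# env-pre -- .env pre-processor sketch: turn a readable, multi-line
# manifest of compose files into the one-line COMPOSE_FILE variable.
set -eu

gen_env() {                 # gen_env <manifest>
    list=""
    while IFS= read -r line; do
        case "$line" in ''|'#'*) continue ;; esac   # skip blanks/comments
        list="${list:+$list:}$line"
    done < "$1"
    printf 'COMPOSE_FILE=%s\n' "$list"
}

# Usage: gen_env compose-files >> .env
```

Users could then append their own plugin lines to the manifest without ever touching the generated one-liner.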
haha, I really like the name of that project. Never thought of an antonym to it.
I think a PHP-FPM plugin would be the very hardest plugin you could do. For that to work, each web-server container would also have to be aware of those PHP-FPM containers, so there needs to be a more general concept for them as well, to allow for different upstream locations on demand. Mount points must match, user/group IDs inside the container must match, etc. Hard, but not impossible. I guess before actually tackling this one, you should have a very clearly defined protocol that must be implemented by both sides: the web server and PHP-FPM. With that protocol defined, one could also replace PHP-FPM with Python or anything else.
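As an illustration of what the web-server side of that protocol has to know today (nginx syntax; the `php` container name and port 9000 are assumptions):

```nginx
# nginx must know the PHP-FPM container's address and port, and both
# containers must agree on the document root -- exactly the coupling
# such a protocol would have to pin down.
location ~ \.php$ {
    fastcgi_pass php:9000;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```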
A simpler approach for a plugin system would be web-server-independent containers: DB, cache, monitoring, etc. Even for that, a protocol must first be defined, so that the intranet page is able to properly present such a container (at least for the plugin system).
I do like the approach of separation, but only in combination with extra value for the project, e.g. the plugin system. This one has really caught my attention; unfortunately I am trying not to explore it yet, as I first need to get rid of a few more pain points before adding more features:
Everything as Volumes (except the projects) to satisfy Windows
Rebuild AutoDNS to satisfy Docker Toolbox and macOS (and get rid of all socat port-forwards in the PHP-FPM container)
Docker-sync integration to drastically improve macOS performance
Time-wise, all three could be achievable in Q1, with difficulty ordered 1, 3, 2.
Once this is done, I guess (and hope) all pain points are resolved and I can actually move on to major features, which will most likely be the plugin system with logical separation of container definitions.
We should keep this idea active nevertheless and see what else pops out of our heads regarding this.
I am eager to see progress on AngelBox, especially in terms of separating configuration.
I am making slow but good progress with AngelBox. I was wondering if you might be interested in joining forces and using the idea of a production-deployment version of DevilBox, i.e. AngelBox, as a way to monetise this little corner of open source? If you would be up for a discussion about this, email me directly at email@example.com