Data is a business's lifeblood. For some companies, it's their entire value proposition. The generation, access, and retention of data are paramount. This yields a few rules of thumb:
Whenever you kill a container, you lose its contents, so data can't live only inside a container. So what's the point?
Docker containers can access a file share external to them.
This is a great way to persist data, especially if you use an external facility like Azure File Storage or Amazon S3, so that it handles all the infrastructure for you.
```
# Create an Azure volume
docker volume create \
  --name logs \
  -d azurefile \
  -o share=logs
```
```
docker run \
  -v logs:/logs \
  stephlocke/ddd-simplewrites
```
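To see why a volume matters, here is a minimal sketch (assuming Docker is available and the `logs` volume from above exists): data written by one short-lived container is still there for the next one, even after the writer is gone.

```shell
# Write into the volume from a throwaway container
docker run --rm -v logs:/logs alpine sh -c 'echo hello > /logs/test.txt'

# The writer is gone, but a fresh container sees the file
docker run --rm -v logs:/logs alpine cat /logs/test.txt   # prints "hello"
```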
Why is this way bad?
Get a Docker container up and running. This will initialise the database files in the mounted directory.
```
docker run \
  -d -v dbs:/var/lib/mysql \
  -p 6603:3306 \
  --env="MYSQL_ROOT_PASSWORD=mypassword" \
  --name mydb \
  mysql
```

```
# Now try running exactly the same command a second time
docker run \
  -d -v dbs:/var/lib/mysql \
  -p 6603:3306 \
  --env="MYSQL_ROOT_PASSWORD=mypassword" \
  --name mydb \
  mysql
```
- Can we do this multiple times with MySQL?
- What's the problem, even if we could?
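A sketch of what happens if you sidestep the trivial name clash (a different container name and host port are hypothetical choices here) and point a second MySQL container at the same data volume: MySQL's storage engine takes an exclusive lock on its data files, so the second instance typically fails to start, and the logs show why.

```shell
# Second container, same dbs volume, different name and host port
docker run \
  -d -v dbs:/var/lib/mysql \
  -p 6604:3306 \
  --env="MYSQL_ROOT_PASSWORD=mypassword" \
  --name mydb2 \
  mysql

# Inspect why it failed to come up: InnoDB cannot lock data files
# that the first instance already holds
docker logs mydb2
```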
Reference data can be stored in a number of ways:
To scale access, you need to avoid locks:
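One lock-avoidance pattern Docker supports directly is read-only mounts: a single writer owns the volume, while any number of readers mount it with the `:ro` flag so they can never contend for write locks. The volume and container names below are hypothetical.

```shell
# One writer, many read-only consumers of the same named volume
docker volume create refdata
docker run -d -v refdata:/data     --name writer  alpine sleep 300
docker run -d -v refdata:/data:ro  --name reader1 alpine sleep 300
docker run -d -v refdata:/data:ro  --name reader2 alpine sleep 300
```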
Keeping your data up and available:
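Part of keeping data available is being able to back a volume up. A common sketch (assuming the `dbs` volume from earlier) is to mount the volume alongside a host directory in a throwaway container and tar the contents across:

```shell
# Archive the dbs volume into the current host directory
docker run --rm \
  -v dbs:/var/lib/mysql \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/dbs.tar.gz /var/lib/mysql
```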
Data needs to be secure, especially in a multi-tenant model:
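In a multi-tenant setup, one simple isolation measure is giving each tenant its own named volume rather than sharing a single mount, so no container can read another tenant's files. The names below are hypothetical.

```shell
# Separate volumes keep tenants' data apart at the mount level
docker volume create tenant-a-data
docker volume create tenant-b-data
docker run -d -v tenant-a-data:/var/lib/mysql \
  --env="MYSQL_ROOT_PASSWORD=mypassword" --name tenant-a mysql
docker run -d -v tenant-b-data:/var/lib/mysql \
  --env="MYSQL_ROOT_PASSWORD=mypassword" --name tenant-b mysql
```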
Docker has acquired Infinit, who have been building a distributed file system that Docker could utilise. Watch this space!
Read the Joyent piece on persisting data.