diff --git a/nixos/doc/manual/containers.xml b/nixos/doc/manual/containers.xml
new file mode 100644
index 000000000000..b8f170fc614f
--- /dev/null
+++ b/nixos/doc/manual/containers.xml
@@ -0,0 +1,242 @@
+
+
+Containers
+
+NixOS allows you to easily run other NixOS instances as
+containers. Containers are a light-weight
+approach to virtualisation that runs software in the container at the
+same speed as in the host system. NixOS containers share the Nix store
+of the host, making container creation very efficient.
+
+Currently, NixOS containers are not perfectly isolated
+from the host system. This means that a user with root access to the
+container can do things that affect the host. So you should not give
+container root access to untrusted users.
+
+NixOS containers can be created in two ways: imperatively, using
+the command nixos-container, and declaratively, by
+specifying them in your configuration.nix. The
+declarative approach implies that containers get upgraded along with
+your host system when you run nixos-rebuild, which
+is often not what you want. By contrast, in the imperative approach,
+containers are configured and updated independently from the host
+system.
+
+
+Imperative container management
+
+We’ll cover imperative container management using
+nixos-container first. You create a container with
+identifier foo as follows:
+
+
+$ nixos-container create foo
+
+
+This creates the container’s root directory in
+/var/lib/containers/foo and a small configuration
+file in /etc/containers/foo.conf. It also builds
+the container’s initial system configuration and stores it in
+/nix/var/nix/profiles/per-container/foo/system. You
+can modify the initial configuration of the container on the command
+line. For instance, to create a container that has
+sshd running, with the given public key for
+root:
+
+
+$ nixos-container create foo --config 'services.openssh.enable = true; \
+ users.extraUsers.root.openssh.authorizedKeys.keys = ["ssh-dss AAAAB3N…"];'
+
+
+
+
+Creating a container does not start it. To start the container,
+run:
+
+
+$ nixos-container start foo
+
+
+This command will return as soon as the container has booted and has
+reached multi-user.target. On the host, the
+container runs within a systemd unit called
+container@container-name.service.
+Thus, if something went wrong, you can get status info using
+systemctl:
+
+
+$ systemctl status container@foo
+
+
+
+
+If the container has started successfully, you can log in as
+root using the root-login operation:
+
+
+$ nixos-container root-login foo
+[root@foo:~]#
+
+
+Note that only root on the host can do this (since there is no
+authentication). You can also get a regular login prompt using the
+login operation, which is available to all users on
+the host:
+
+
+$ nixos-container login foo
+foo login: alice
+Password: ***
+
+
+With nixos-container run, you can execute arbitrary
+commands in the container:
+
+
+$ nixos-container run foo -- uname -a
+Linux foo 3.4.82 #1-NixOS SMP Thu Mar 20 14:44:05 UTC 2014 x86_64 GNU/Linux
+
+
+
+
+There are several ways to change the configuration of the
+container. First, on the host, you can edit
+/var/lib/containers/name/etc/nixos/configuration.nix,
+and run
+
+
+$ nixos-container update foo
+
+
+This will build and activate the new configuration. You can also
+specify a new configuration on the command line:
+
+
+$ nixos-container update foo --config 'services.httpd.enable = true; \
+ services.httpd.adminAddr = "foo@example.org";'
+
+$ curl http://$(nixos-container show-ip foo)/
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">…
+
+
+However, note that this will overwrite the container’s
+/etc/nixos/configuration.nix.
+
+Alternatively, you can change the configuration from within the
+container itself by running nixos-rebuild switch
+inside the container. Note that the container by default does not have
+a copy of the NixOS channel, so you should run nix-channel
+--update first.
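+
+For example, a typical sequence from inside the container might look
+like this (shown as a sketch; the channel configured in the container
+may differ on your system):
+
+
+[root@foo:~]# nix-channel --update
+[root@foo:~]# nixos-rebuild switch
+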
+
+Containers can be stopped and started using
+nixos-container stop and nixos-container
+start, respectively, or by using
+systemctl on the container’s service unit. To
+destroy a container, including its file system, do
+
+
+$ nixos-container destroy foo
+
+
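+To illustrate the systemctl alternative mentioned above (a
+sketch, using the foo container from the earlier examples):
+
+
+$ systemctl stop container@foo
+$ systemctl start container@foo
+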
+
+
+
+
+
+Declarative container specification
+
+You can also specify containers and their configuration in the
+host’s configuration.nix. For example, the
+following specifies that there shall be a container named
+database running PostgreSQL:
+
+
+containers.database =
+ { config =
+ { config, pkgs, ... }:
+ { services.postgresql.enable = true;
+ services.postgresql.package = pkgs.postgresql92;
+ };
+ };
+
+
+If you run nixos-rebuild switch, the container will
+be built and started. If the container was already running, it will be
+updated in place, without rebooting.
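+
+After the rebuild, you can check on the host that the container is
+running, for instance via its systemd unit (a sketch, following the
+unit naming described above):
+
+
+$ systemctl status container@database
+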
+
+By default, declarative containers share the network namespace
+of the host, meaning that they can listen on (privileged)
+ports. However, they cannot change the network configuration. You can
+give a container its own network as follows:
+
+
+containers.database =
+ { privateNetwork = true;
+ hostAddress = "192.168.100.10";
+ localAddress = "192.168.100.11";
+ };
+
+
+This gives the container a private virtual Ethernet interface with IP
+address 192.168.100.11, which is hooked up to a
+virtual Ethernet interface on the host with IP address
+192.168.100.10. (See the next section for details
+on container networking.)
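+
+Once the container is up, it should be reachable from the host at its
+local address (a sketch, assuming the addresses configured above and
+that the container’s firewall permits ICMP echo requests):
+
+
+$ ping -c1 192.168.100.11
+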
+
+To disable the container, just remove it from
+configuration.nix and run nixos-rebuild
+switch. Note that this will not delete the root directory of
+the container in /var/lib/containers.
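+
+If you no longer need the container’s state either, you can remove its
+root directory by hand (a sketch; database is the example
+container above, so double-check the path before deleting anything):
+
+
+$ rm -rf /var/lib/containers/database
+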
+
+
+
+
+Networking
+
+When you create a container using nixos-container
+create, it gets its own private IPv4 address in the range
+10.233.0.0/16. You can get the container’s IPv4
+address as follows:
+
+
+$ nixos-container show-ip foo
+10.233.4.2
+
+$ ping -c1 10.233.4.2
+64 bytes from 10.233.4.2: icmp_seq=1 ttl=64 time=0.106 ms
+
+
+
+
+Networking is implemented using a pair of virtual Ethernet
+devices. The network interface in the container is called
+eth0, while the matching interface in the host is
+called c-container-name
+(e.g., c-foo). The container has its own network
+namespace and the CAP_NET_ADMIN capability, so it
+can perform arbitrary network configuration such as setting up
+firewall rules, without affecting or having access to the host’s
+network.
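+
+For instance, you could give the container its own firewall rules
+without touching the host’s configuration (a sketch, reusing the
+imperative update command shown earlier; the firewall options are
+ordinary NixOS options):
+
+
+$ nixos-container update foo --config 'networking.firewall.enable = true; \
+  networking.firewall.allowedTCPPorts = [ 80 ];'
+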
+
+By default, containers cannot talk to the outside network. If
+you want that, you should set up Network Address Translation (NAT)
+rules on the host to rewrite container traffic to use your external
+IP address. This can be accomplished using the following configuration
+on the host:
+
+
+networking.nat.enable = true;
+networking.nat.internalInterfaces = ["c-+"];
+networking.nat.externalInterface = "eth0";
+
+where eth0 should be replaced with the desired
+external interface. Note that c-+ is a wildcard
+that matches all container interfaces.
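+
+With NAT enabled, the container should be able to reach external hosts.
+You can verify this with something like the following (a sketch; any
+reachable external address will do):
+
+
+$ nixos-container run foo -- ping -c1 8.8.8.8
+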
+
+
+
+
+
+
diff --git a/nixos/doc/manual/manual.xml b/nixos/doc/manual/manual.xml
index f9775f4f0170..5753a8ff9e74 100644
--- a/nixos/doc/manual/manual.xml
+++ b/nixos/doc/manual/manual.xml
@@ -54,6 +54,7 @@
+
diff --git a/nixos/modules/virtualisation/containers.nix b/nixos/modules/virtualisation/containers.nix
index fbdd3f9034c6..c53bd7d3509d 100644
--- a/nixos/modules/virtualisation/containers.nix
+++ b/nixos/modules/virtualisation/containers.nix
@@ -281,6 +281,8 @@ in
'';
}) config.containers;
+ # FIXME: auto-start containers.
+
# Generate /etc/hosts entries for the containers.
networking.extraHosts = concatStrings (mapAttrsToList (name: cfg: optionalString (cfg.localAddress != null)
''