diff --git a/doc/builders/fetchers.chapter.md b/doc/builders/fetchers.chapter.md index 09a41cd9ce0b..30591c1673c8 100644 --- a/doc/builders/fetchers.chapter.md +++ b/doc/builders/fetchers.chapter.md @@ -10,7 +10,7 @@ For those who develop and maintain fetchers, a similar problem arises with chang ## `fetchurl` and `fetchzip` {#fetchurl} -Two basic fetchers are `fetchurl` and `fetchzip`. Both of these have two required arguments, a URL and a hash. The hash is typically `sha256`, although many more hash algorithms are supported. Nixpkgs contributors are currently recommended to use `sha256`. This hash will be used by Nix to identify your source. A typical usage of fetchurl is provided below. +Two basic fetchers are `fetchurl` and `fetchzip`. Both of these have two required arguments, a URL and a hash. The hash is typically `sha256`, although many more hash algorithms are supported. Nixpkgs contributors are currently recommended to use `sha256`. This hash will be used by Nix to identify your source. A typical usage of `fetchurl` is provided below. ```nix { stdenv, fetchurl }: @@ -24,9 +24,9 @@ stdenv.mkDerivation { } ``` -The main difference between `fetchurl` and `fetchzip` is in how they store the contents. `fetchurl` will store the unaltered contents of the URL within the Nix store. `fetchzip` on the other hand will decompress the archive for you, making files and directories directly accessible in the future. `fetchzip` can only be used with archives. Despite the name, `fetchzip` is not limited to .zip files and can also be used with any tarball. +The main difference between `fetchurl` and `fetchzip` is in how they store the contents. `fetchurl` will store the unaltered contents of the URL within the Nix store. `fetchzip` on the other hand, will decompress the archive for you, making files and directories directly accessible in the future. `fetchzip` can only be used with archives. Despite the name, `fetchzip` is not limited to .zip files and can also be used with any tarball. -`fetchpatch` works very similarly to `fetchurl` with the same arguments expected. It expects patch files as a source and performs normalization on them before computing the checksum. For example it will remove comments or other unstable parts that are sometimes added by version control systems and can change over time. +`fetchpatch` works very similarly to `fetchurl` with the same arguments expected. It expects patch files as a source and performs normalization on them before computing the checksum. For example, it will remove comments or other unstable parts that are sometimes added by version control systems and can change over time. Most other fetchers return a directory rather than a single file. @@ -38,9 +38,9 @@ Used with Subversion. Expects `url` to a Subversion directory, `rev`, and `sha25 Used with Git. Expects `url` to a Git repo, `rev`, and `sha256`. `rev` in this case can be full the git commit id (SHA1 hash) or a tag name like `refs/tags/v1.0`. -Additionally the following optional arguments can be given: `fetchSubmodules = true` makes `fetchgit` also fetch the submodules of a repository. If `deepClone` is set to true, the entire repository is cloned as opposing to just creating a shallow clone. `deepClone = true` also implies `leaveDotGit = true` which means that the `.git` directory of the clone won't be removed after checkout. +Additionally, the following optional arguments can be given: `fetchSubmodules = true` makes `fetchgit` also fetch the submodules of a repository. 
If `deepClone` is set to true, the entire repository is cloned as opposed to just creating a shallow clone. `deepClone = true` also implies `leaveDotGit = true` which means that the `.git` directory of the clone won't be removed after checkout. -If only parts of the repository are needed, `sparseCheckout` can be used. This will prevent git from fetching unnecessary blobs from server, see [git sparse-checkout](https://git-scm.com/docs/git-sparse-checkout) and [git clone --filter](https://git-scm.com/docs/git-clone#Documentation/git-clone.txt---filterltfilter-specgt) for more infomation: +If only parts of the repository are needed, `sparseCheckout` can be used. This will prevent git from fetching unnecessary blobs from the server, see [git sparse-checkout](https://git-scm.com/docs/git-sparse-checkout) and [git clone --filter](https://git-scm.com/docs/git-clone#Documentation/git-clone.txt---filterltfilter-specgt) for more information: ```nix { stdenv, fetchgit }: @@ -78,17 +78,17 @@ A number of fetcher functions wrap part of `fetchurl` and `fetchzip`. They are m ## `fetchFromGitHub` {#fetchfromgithub} -`fetchFromGitHub` expects four arguments. `owner` is a string corresponding to the GitHub user or organization that controls this repository. `repo` corresponds to the name of the software repository. These are located at the top of every GitHub HTML page as `owner`/`repo`. `rev` corresponds to the Git commit hash or tag (e.g `v1.0`) that will be downloaded from Git. Finally, `sha256` corresponds to the hash of the extracted directory. Again, other hash algorithms are also available but `sha256` is currently preferred. +`fetchFromGitHub` expects four arguments. `owner` is a string corresponding to the GitHub user or organization that controls this repository. `repo` corresponds to the name of the software repository. These are located at the top of every GitHub HTML page as `owner`/`repo`. `rev` corresponds to the Git commit hash or tag (e.g. `v1.0`) that will be downloaded from Git. Finally, `sha256` corresponds to the hash of the extracted directory. Again, other hash algorithms are also available, but `sha256` is currently preferred. `fetchFromGitHub` uses `fetchzip` to download the source archive generated by GitHub for the specified revision. If `leaveDotGit`, `deepClone` or `fetchSubmodules` are set to `true`, `fetchFromGitHub` will use `fetchgit` instead. Refer to its section for documentation of these options. ## `fetchFromGitLab` {#fetchfromgitlab} -This is used with GitLab repositories. The arguments expected are very similar to fetchFromGitHub above. +This is used with GitLab repositories. The arguments expected are very similar to `fetchFromGitHub` above. ## `fetchFromGitiles` {#fetchfromgitiles} -This is used with Gitiles repositories. The arguments expected are similar to fetchgit. +This is used with Gitiles repositories. The arguments expected are similar to `fetchgit`. ## `fetchFromBitbucket` {#fetchfrombitbucket} @@ -96,11 +96,11 @@ This is used with BitBucket repositories. The arguments expected are very simila ## `fetchFromSavannah` {#fetchfromsavannah} -This is used with Savannah repositories. The arguments expected are very similar to fetchFromGitHub above. +This is used with Savannah repositories. The arguments expected are very similar to `fetchFromGitHub` above. ## `fetchFromRepoOrCz` {#fetchfromrepoorcz} -This is used with repo.or.cz repositories. The arguments expected are very similar to fetchFromGitHub above. +This is used with repo.or.cz repositories. 
The arguments expected are very similar to `fetchFromGitHub` above. ## `fetchFromSourcehut` {#fetchfromsourcehut} @@ -111,4 +111,4 @@ or "hg"), `domain` and `fetchSubmodules`. If `fetchSubmodules` is `true`, `fetchFromSourcehut` uses `fetchgit` or `fetchhg` with `fetchSubmodules` or `fetchSubrepos` set to `true`, -respectively. Otherwise the fetcher uses `fetchzip`. +respectively. Otherwise, the fetcher uses `fetchzip`. diff --git a/doc/builders/images/dockertools.section.md b/doc/builders/images/dockertools.section.md index 7ff4b2aeb369..458b0b36720f 100644 --- a/doc/builders/images/dockertools.section.md +++ b/doc/builders/images/dockertools.section.md @@ -58,7 +58,7 @@ After the new layer has been created, its closure (to which `contents`, `config` At the end of the process, only one new single layer will be produced and added to the resulting image. -The resulting repository will only list the single image `image/tag`. In the case of [the `buildImage` example](#ex-dockerTools-buildImage) it would be `redis/latest`. +The resulting repository will only list the single image `image/tag`. In the case of [the `buildImage` example](#ex-dockerTools-buildImage), it would be `redis/latest`. It is possible to inspect the arguments with which an image was built using its `buildArgs` attribute. @@ -87,7 +87,7 @@ pkgs.dockerTools.buildImage { } ``` -and now the Docker CLI will display a reasonable date and sort the images as expected: +Now the Docker CLI will display a reasonable date and sort the images as expected: ```ShellSession $ docker images @@ -95,7 +95,7 @@ REPOSITORY TAG IMAGE ID CREATED SIZE hello latest de2bf4786de6 About a minute ago 25.2MB ``` -however, the produced images will not be binary reproducible. +However, the produced images will not be binary reproducible. ## buildLayeredImage {#ssec-pkgs-dockerTools-buildLayeredImage} @@ -119,13 +119,13 @@ Create a Docker image with many of the store paths being on their own layer to i `contents` _optional_ -: Top level paths in the container. Either a single derivation, or a list of derivations. +: Top-level paths in the container. Either a single derivation, or a list of derivations. *Default:* `[]` `config` _optional_ -: Run-time configuration of the container. A full list of the options are available at in the [ Docker Image Specification v1.2.0 ](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions). +: Run-time configuration of the container. A full list of the options is available in the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions). *Default:* `{}` @@ -195,9 +195,9 @@ pkgs.dockerTools.buildLayeredImage { Increasing the `maxLayers` increases the number of layers which have a chance to be shared between different images. -Modern Docker installations support up to 128 layers, however older versions support as few as 42. +Modern Docker installations support up to 128 layers, but older versions support as few as 42. -If the produced image will not be extended by other Docker builds, it is safe to set `maxLayers` to `128`. However it will be impossible to extend the image further. +If the produced image will not be extended by other Docker builds, it is safe to set `maxLayers` to `128`. However, it will be impossible to extend the image further. 
The first (`maxLayers-2`) most "popular" paths will have their own individual layers, then layer \#`maxLayers-1` will contain all the remaining "unpopular" paths, and finally layer \#`maxLayers` will contain the Image configuration. @@ -213,7 +213,7 @@ The image produced by running the output script can be piped directly into `dock $(nix-build) | docker load ``` -Alternatively, the image be piped via `gzip` into `skopeo`, e.g. to copy it into a registry: +Alternatively, the image can be piped via `gzip` into `skopeo`, e.g., to copy it into a registry: ```ShellSession $(nix-build) | gzip --fast | skopeo copy docker-archive:/dev/stdin docker://some_docker_registry/myimage:tag diff --git a/doc/builders/images/ocitools.section.md b/doc/builders/images/ocitools.section.md index d3dee57ebac6..d3ab8776786b 100644 --- a/doc/builders/images/ocitools.section.md +++ b/doc/builders/images/ocitools.section.md @@ -1,10 +1,10 @@ # pkgs.ociTools {#sec-pkgs-ociTools} -`pkgs.ociTools` is a set of functions for creating containers according to the [OCI container specification v1.0.0](https://github.com/opencontainers/runtime-spec). Beyond that it makes no assumptions about the container runner you choose to use to run the created container. +`pkgs.ociTools` is a set of functions for creating containers according to the [OCI container specification v1.0.0](https://github.com/opencontainers/runtime-spec). Beyond that, it makes no assumptions about the container runner you choose to use to run the created container. ## buildContainer {#ssec-pkgs-ociTools-buildContainer} -This function creates a simple OCI container that runs a single command inside of it. An OCI container consists of a `config.json` and a rootfs directory.The nix store of the container will contain all referenced dependencies of the given command. +This function creates a simple OCI container that runs a single command inside of it. An OCI container consists of a `config.json` and a rootfs directory. The nix store of the container will contain all referenced dependencies of the given command. The parameters of `buildContainer` with an example value are described below: @@ -30,7 +30,7 @@ buildContainer { } ``` -- `args` specifies a set of arguments to run inside the container. This is the only required argument for `buildContainer`. All referenced packages inside the derivation will be made available inside the container +- `args` specifies a set of arguments to run inside the container. This is the only required argument for `buildContainer`. All referenced packages inside the derivation will be made available inside the container. - `mounts` specifies additional mount points chosen by the user. By default only a minimal set of necessary filesystems are mounted into the container (e.g procfs, cgroupfs) diff --git a/doc/builders/images/snaptools.section.md b/doc/builders/images/snaptools.section.md index 5f710d2de7fe..259fa1b06180 100644 --- a/doc/builders/images/snaptools.section.md +++ b/doc/builders/images/snaptools.section.md @@ -33,7 +33,7 @@ in snapTools.makeSnap { ## Build a Graphical Snap {#ssec-pkgs-snapTools-build-a-snap-firefox} -Graphical programs require many more integrations with the host. This example uses Firefox as an example, because it is one of the most complicated programs we could package. +Graphical programs require many more integrations with the host. This example uses Firefox because it is one of the most complicated programs we could package. 
``` {#ex-snapTools-buildSnap-firefox .nix} let diff --git a/doc/builders/packages/citrix.section.md b/doc/builders/packages/citrix.section.md index b25ecb0bdefc..4721f7e90f7a 100644 --- a/doc/builders/packages/citrix.section.md +++ b/doc/builders/packages/citrix.section.md @@ -4,13 +4,13 @@ The [Citrix Workspace App](https://www.citrix.com/products/workspace-app/) is a ## Basic usage {#sec-citrix-base} -The tarball archive needs to be downloaded manually as the license agreements of the vendor for [Citrix Workspace](https://www.citrix.de/downloads/workspace-app/linux/workspace-app-for-linux-latest.html) needs to be accepted first. Then run `nix-prefetch-url file://$PWD/linuxx64-$version.tar.gz`. With the archive available in the store the package can be built and installed with Nix. +The tarball archive needs to be downloaded manually, as the license agreements of the vendor for [Citrix Workspace](https://www.citrix.de/downloads/workspace-app/linux/workspace-app-for-linux-latest.html) need to be accepted first. Then run `nix-prefetch-url file://$PWD/linuxx64-$version.tar.gz`. With the archive available in the store, the package can be built and installed with Nix. -## Citrix Selfservice {#sec-citrix-selfservice} +## Citrix Self-service {#sec-citrix-selfservice} -The [selfservice](https://support.citrix.com/article/CTX200337) is an application managing Citrix desktops and applications. Please note that this feature only works with at least citrix_workspace_20_06_0 and later versions. +The [self-service](https://support.citrix.com/article/CTX200337) is an application managing Citrix desktops and applications. Please note that this feature only works with citrix_workspace_20_06_0 and later versions. -In order to set this up, you first have to [download the `.cr` file from the Netscaler Gateway](https://its.uiowa.edu/support/article/102186). After that you can configure the `selfservice` like this: +In order to set this up, you first have to [download the `.cr` file from the Netscaler Gateway](https://its.uiowa.edu/support/article/102186). After that, you can configure the `selfservice` like this: ```ShellSession $ storebrowse -C ~/Downloads/receiverconfig.cr @@ -19,7 +19,7 @@ $ selfservice ## Custom certificates {#sec-citrix-custom-certs} -The `Citrix Workspace App` in `nixpkgs` trusts several certificates [from the Mozilla database](https://curl.haxx.se/docs/caextract.html) by default. However several companies using Citrix might require their own corporate certificate. On distros with imperative packaging these certs can be stored easily in [`$ICAROOT`](https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/), however this directory is a store path in `nixpkgs`. In order to work around this issue the package provides a simple mechanism to add custom certificates without rebuilding the entire package using `symlinkJoin`: +The `Citrix Workspace App` in `nixpkgs` trusts several certificates [from the Mozilla database](https://curl.haxx.se/docs/caextract.html) by default. However, several companies using Citrix might require their own corporate certificate. On distros with imperative packaging, these certs can be stored easily in [`$ICAROOT`](https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/); however, this directory is a store path in `nixpkgs`. 
In order to work around this issue, the package provides a simple mechanism to add custom certificates without rebuilding the entire package using `symlinkJoin`: ```nix with import <nixpkgs> { config.allowUnfree = true; }; diff --git a/doc/builders/packages/eclipse.section.md b/doc/builders/packages/eclipse.section.md index faabb1884501..8cf7426833b8 100644 --- a/doc/builders/packages/eclipse.section.md +++ b/doc/builders/packages/eclipse.section.md @@ -8,9 +8,9 @@ Nixpkgs provides a number of packages that will install Eclipse in its various f $ nix-env -f '<nixpkgs>' -qaP -A eclipses --description ``` -Once an Eclipse variant is installed it can be run using the `eclipse` command, as expected. From within Eclipse it is then possible to install plugins in the usual manner by either manually specifying an Eclipse update site or by installing the Marketplace Client plugin and using it to discover and install other plugins. This installation method provides an Eclipse installation that closely resemble a manually installed Eclipse. +Once an Eclipse variant is installed, it can be run using the `eclipse` command, as expected. From within Eclipse, it is then possible to install plugins in the usual manner by either manually specifying an Eclipse update site or by installing the Marketplace Client plugin and using it to discover and install other plugins. This installation method provides an Eclipse installation that closely resembles a manually installed Eclipse. -If you prefer to install plugins in a more declarative manner then Nixpkgs also offer a number of Eclipse plugins that can be installed in an _Eclipse environment_. This type of environment is created using the function `eclipseWithPlugins` found inside the `nixpkgs.eclipses` attribute set. This function takes as argument `{ eclipse, plugins ? [], jvmArgs ? [] }` where `eclipse` is a one of the Eclipse packages described above, `plugins` is a list of plugin derivations, and `jvmArgs` is a list of arguments given to the JVM running the Eclipse. For example, say you wish to install the latest Eclipse Platform with the popular Eclipse Color Theme plugin and also allow Eclipse to use more RAM. You could then add +If you prefer to install plugins in a more declarative manner, then Nixpkgs also offers a number of Eclipse plugins that can be installed in an _Eclipse environment_. This type of environment is created using the function `eclipseWithPlugins` found inside the `nixpkgs.eclipses` attribute set. This function takes as argument `{ eclipse, plugins ? [], jvmArgs ? [] }` where `eclipse` is one of the Eclipse packages described above, `plugins` is a list of plugin derivations, and `jvmArgs` is a list of arguments given to the JVM running the Eclipse. For example, say you wish to install the latest Eclipse Platform with the popular Eclipse Color Theme plugin and also allow Eclipse to use more RAM. You could then add: ```nix packageOverrides = pkgs: { @@ -22,15 +22,15 @@ packageOverrides = pkgs: { } ``` -to your Nixpkgs configuration (`~/.config/nixpkgs/config.nix`) and install it by running `nix-env -f '<nixpkgs>' -iA myEclipse` and afterward run Eclipse as usual. It is possible to find out which plugins are available for installation using `eclipseWithPlugins` by running +to your Nixpkgs configuration (`~/.config/nixpkgs/config.nix`) and install it by running `nix-env -f '<nixpkgs>' -iA myEclipse` and afterward run Eclipse as usual. 
It is possible to find out which plugins are available for installation using `eclipseWithPlugins` by running: ```ShellSession $ nix-env -f '<nixpkgs>' -qaP -A eclipses.plugins --description ``` -If there is a need to install plugins that are not available in Nixpkgs then it may be possible to define these plugins outside Nixpkgs using the `buildEclipseUpdateSite` and `buildEclipsePlugin` functions found in the `nixpkgs.eclipses.plugins` attribute set. Use the `buildEclipseUpdateSite` function to install a plugin distributed as an Eclipse update site. This function takes `{ name, src }` as argument where `src` indicates the Eclipse update site archive. All Eclipse features and plugins within the downloaded update site will be installed. When an update site archive is not available then the `buildEclipsePlugin` function can be used to install a plugin that consists of a pair of feature and plugin JARs. This function takes an argument `{ name, srcFeature, srcPlugin }` where `srcFeature` and `srcPlugin` are the feature and plugin JARs, respectively. +If there is a need to install plugins that are not available in Nixpkgs, then it may be possible to define these plugins outside Nixpkgs using the `buildEclipseUpdateSite` and `buildEclipsePlugin` functions found in the `nixpkgs.eclipses.plugins` attribute set. Use the `buildEclipseUpdateSite` function to install a plugin distributed as an Eclipse update site. This function takes `{ name, src }` as argument, where `src` indicates the Eclipse update site archive. All Eclipse features and plugins within the downloaded update site will be installed. When an update site archive is not available, then the `buildEclipsePlugin` function can be used to install a plugin that consists of a pair of feature and plugin JARs. This function takes an argument `{ name, srcFeature, srcPlugin }` where `srcFeature` and `srcPlugin` are the feature and plugin JARs, respectively. -Expanding the previous example with two plugins using the above functions we have +Expanding the previous example with two plugins using the above functions, we have: ```nix packageOverrides = pkgs: { diff --git a/doc/builders/packages/elm.section.md b/doc/builders/packages/elm.section.md index ae223c802da4..063dd73d9de4 100644 --- a/doc/builders/packages/elm.section.md +++ b/doc/builders/packages/elm.section.md @@ -1,6 +1,6 @@ # Elm {#sec-elm} -To start a development environment do +To start a development environment, run: ```ShellSession nix-shell -p elmPackages.elm elmPackages.elm-format diff --git a/doc/builders/packages/emacs.section.md b/doc/builders/packages/emacs.section.md index 577f1a23ce0e..a202606966c0 100644 --- a/doc/builders/packages/emacs.section.md +++ b/doc/builders/packages/emacs.section.md @@ -20,7 +20,7 @@ The Emacs package comes with some extra helpers to make it easier to configure. } ``` -You can install it like any other packages via `nix-env -iA myEmacs`. However, this will only install those packages. It will not `configure` them for us. To do this, we need to provide a configuration file. Luckily, it is possible to do this from within Nix! By modifying the above example, we can make Emacs load a custom config file. The key is to create a package that provide a `default.el` file in `/share/emacs/site-start/`. Emacs knows to load this file automatically when it starts. +You can install it like any other package via `nix-env -iA myEmacs`. However, this will only install those packages. It will not `configure` them for us. To do this, we need to provide a configuration file. 
Luckily, it is possible to do this from within Nix! By modifying the above example, we can make Emacs load a custom config file. The key is to create a package that provides a `default.el` file in `/share/emacs/site-start/`. Emacs knows to load this file automatically when it starts. ```nix { @@ -101,9 +101,9 @@ You can install it like any other packages via `nix-env -iA myEmacs`. However, t } ``` -This provides a fairly full Emacs start file. It will load in addition to the user's presonal config. You can always disable it by passing `-q` to the Emacs command. +This provides a fairly full Emacs start file. It will load in addition to the user's personal config. You can always disable it by passing `-q` to the Emacs command. -Sometimes `emacs.pkgs.withPackages` is not enough, as this package set has some priorities imposed on packages (with the lowest priority assigned to Melpa Unstable, and the highest for packages manually defined in `pkgs/top-level/emacs-packages.nix`). But you can't control this priorities when some package is installed as a dependency. You can override it on per-package-basis, providing all the required dependencies manually - but it's tedious and there is always a possibility that an unwanted dependency will sneak in through some other package. To completely override such a package you can use `overrideScope'`. +Sometimes `emacs.pkgs.withPackages` is not enough, as this package set has some priorities imposed on packages (with the lowest priority assigned to Melpa Unstable, and the highest for packages manually defined in `pkgs/top-level/emacs-packages.nix`). But you can't control these priorities when some package is installed as a dependency. You can override it on a per-package-basis, providing all the required dependencies manually, but it's tedious and there is always a possibility that an unwanted dependency will sneak in through some other package. To completely override such a package, you can use `overrideScope'`. ```nix overrides = self: super: rec { diff --git a/doc/builders/packages/etc-files.section.md b/doc/builders/packages/etc-files.section.md index 2405a54634d8..94a769ed3355 100644 --- a/doc/builders/packages/etc-files.section.md +++ b/doc/builders/packages/etc-files.section.md @@ -1,10 +1,10 @@ # /etc files {#etc} -Certain calls in glibc require access to runtime files found in /etc such as `/etc/protocols` or `/etc/services` -- [getprotobyname](https://linux.die.net/man/3/getprotobyname) is one such function. +Certain calls in glibc require access to runtime files found in `/etc` such as `/etc/protocols` or `/etc/services` -- [getprotobyname](https://linux.die.net/man/3/getprotobyname) is one such function. -On non-NixOS distributions these files are typically provided by packages (i.e. [netbase](https://packages.debian.org/sid/netbase)) if not already pre-installed in your distribution. This can cause non-reproducibility for code if they rely on these files being present. +On non-NixOS distributions these files are typically provided by packages (i.e., [netbase](https://packages.debian.org/sid/netbase)) if not already pre-installed in your distribution. This can cause non-reproducibility for code if they rely on these files being present. -If [iana-etc](https://hydra.nixos.org/job/nixos/trunk-combined/nixpkgs.iana-etc.x86_64-linux) is part of your _buildInputs_ then it will set the environment varaibles `NIX_ETC_PROTOCOLS` and `NIX_ETC_SERVICES` to the corresponding files in the package through a _setup-hook_. 
+If [iana-etc](https://hydra.nixos.org/job/nixos/trunk-combined/nixpkgs.iana-etc.x86_64-linux) is part of your `buildInputs`, then it will set the environment variables `NIX_ETC_PROTOCOLS` and `NIX_ETC_SERVICES` to the corresponding files in the package through a setup hook. ```bash @@ -15,4 +15,4 @@ NIX_ETC_SERVICES=/nix/store/aj866hr8fad8flnggwdhrldm0g799ccz-iana-etc-20210225/e NIX_ETC_PROTOCOLS=/nix/store/aj866hr8fad8flnggwdhrldm0g799ccz-iana-etc-20210225/etc/protocols ``` -Nixpkg's version of [glibc](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/libraries/glibc/default.nix) has been patched to check for the existence of these environment variables. If the environment variable are *not set*, then it will attempt to find the files at the default location within _/etc_. +Nixpkgs' version of [glibc](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/libraries/glibc/default.nix) has been patched to check for the existence of these environment variables. If the environment variables are *not* set, then it will attempt to find the files at the default location within `/etc`. diff --git a/doc/builders/packages/firefox.section.md b/doc/builders/packages/firefox.section.md index d6426981da7d..6f7d39c8b5e3 100644 --- a/doc/builders/packages/firefox.section.md +++ b/doc/builders/packages/firefox.section.md @@ -2,7 +2,7 @@ ## Build wrapped Firefox with extensions and policies {#build-wrapped-firefox-with-extensions-and-policies} -The `wrapFirefox` function allows to pass policies, preferences and extension that are available to Firefox. With the help of `fetchFirefoxAddon` this allows build a Firefox version that already comes with addons pre-installed: +The `wrapFirefox` function allows you to pass policies, preferences and extensions that are available to Firefox. With the help of `fetchFirefoxAddon`, this allows you to build a Firefox version that already comes with add-ons pre-installed: ```nix { @@ -40,13 +40,12 @@ The `wrapFirefox` function allows to pass policies, preferences and extension th } ``` -If `nixExtensions != null` then all manually installed addons will be uninstalled from your browser profile. -To view available enterprise policies visit [enterprise policies](https://github.com/mozilla/policy-templates#enterprisepoliciesenabled) -or type into the Firefox url bar: `about:policies#documentation`. -Nix installed addons do not have a valid signature, which is why signature verification is disabled. This does not compromise security because downloaded addons are checksumed and manual addons can't be installed. Also make sure that the `name` field of fetchFirefoxAddon is unique. If you remove an addon from the nixExtensions array, rebuild and start Firefox the removed addon will be completly removed with all of its settings. +If `nixExtensions != null`, then all manually installed add-ons will be uninstalled from your browser profile. +To view available enterprise policies, visit [enterprise policies](https://github.com/mozilla/policy-templates#enterprisepoliciesenabled) +or type into the Firefox URL bar: `about:policies#documentation`. +Nix-installed add-ons do not have a valid signature, which is why signature verification is disabled. This does not compromise security because downloaded add-ons are checksummed and manual add-ons can't be installed. Also, make sure that the `name` field of `fetchFirefoxAddon` is unique. 
If you remove an add-on from the `nixExtensions` array, rebuild and start Firefox: the removed add-on will be completely removed with all of its settings. ## Troubleshooting {#sec-firefox-troubleshooting} -If addons are marked as broken or the signature is invalid, make sure you have Firefox ESR installed. Normal Firefox does not provide the ability anymore to disable signature verification for addons thus nix addons get disabled by the normal Firefox binary. - -If addons do not appear installed although they have been defined in your nix configuration file reset the local addon state of your Firefox profile by clicking `help -> restart with addons disabled -> restart -> refresh firefox`. This can happen if you switch from manual addon mode to nix addon mode and then back to manual mode and then again to nix addon mode. +If add-ons are marked as broken or the signature is invalid, make sure you have Firefox ESR installed. Normal Firefox no longer provides the ability to disable signature verification for add-ons; thus, nix add-ons get disabled by the normal Firefox binary. +If add-ons do not appear installed despite being defined in your nix configuration file, reset the local add-on state of your Firefox profile by clicking `Help -> More Troubleshooting Information -> Refresh Firefox`. This can happen if you switch from manual add-on mode to nix add-on mode and then back to manual mode and then again to nix add-on mode. diff --git a/doc/builders/packages/fish.section.md b/doc/builders/packages/fish.section.md index 3086bd68348f..85b57acd1090 100644 --- a/doc/builders/packages/fish.section.md +++ b/doc/builders/packages/fish.section.md @@ -36,7 +36,7 @@ using `buildFishPlugin` and running unit tests with the `fishtape` test runner. ## Fish wrapper {#sec-fish-wrapper} The `wrapFish` package is a wrapper around Fish which can be used to create -Fish shells initialised with some plugins as well as completions, configuration +Fish shells initialized with some plugins as well as completions, configuration snippets and functions sourced from the given paths. This provides a convenient way to test Fish plugins and scripts without having to alter the environment. diff --git a/doc/builders/packages/fuse.section.md b/doc/builders/packages/fuse.section.md index eb0023fcbc3e..6deea6b5626e 100644 --- a/doc/builders/packages/fuse.section.md +++ b/doc/builders/packages/fuse.section.md @@ -24,10 +24,10 @@ packages on macOS: checking for fuse.h... no configure: error: No fuse.h found. -This happens on autoconf based projects that uses `AC_CHECK_HEADERS` or +This happens on autoconf-based projects that use `AC_CHECK_HEADERS` or `AC_CHECK_LIBS` to detect libfuse, and will occur even when the `fuse` package is included in `buildInputs`. It happens because libfuse headers throw an error -on macOS if the `FUSE_USE_VERSION` macro is undefined. Many proejcts do define +on macOS if the `FUSE_USE_VERSION` macro is undefined. Many projects do define `FUSE_USE_VERSION`, but only inside C source files. This results in the above error at configure time because the configure script would attempt to compile sample FUSE programs without defining `FUSE_USE_VERSION`. diff --git a/doc/builders/packages/ibus.section.md b/doc/builders/packages/ibus.section.md index 2ce85467bb86..1b09d3fbbab9 100644 --- a/doc/builders/packages/ibus.section.md +++ b/doc/builders/packages/ibus.section.md @@ -6,7 +6,7 @@ This package is an ibus-based completion method to speed up typing. 
IBus needs to be configured accordingly to activate `typing-booster`. The configuration depends on the desktop manager in use. For detailed instructions, please refer to the [upstream docs](https://mike-fabian.github.io/ibus-typing-booster/documentation.html). -On NixOS you need to explicitly enable `ibus` with given engines before customizing your desktop to use `typing-booster`. This can be achieved using the `ibus` module: +On NixOS, you need to explicitly enable `ibus` with given engines before customizing your desktop to use `typing-booster`. This can be achieved using the `ibus` module: ```nix { pkgs, ... }: { @@ -19,7 +19,7 @@ On NixOS you need to explicitly enable `ibus` with given engines before customiz ## Using custom hunspell dictionaries {#sec-ibus-typing-booster-customize-hunspell} -The IBus engine is based on `hunspell` to support completion in many languages. By default the dictionaries `de-de`, `en-us`, `fr-moderne` `es-es`, `it-it`, `sv-se` and `sv-fi` are in use. To add another dictionary, the package can be overridden like this: +The IBus engine is based on `hunspell` to support completion in many languages. By default, the dictionaries `de-de`, `en-us`, `fr-moderne` `es-es`, `it-it`, `sv-se` and `sv-fi` are in use. To add another dictionary, the package can be overridden like this: ```nix ibus-engines.typing-booster.override { langs = [ "de-at" "en-gb" ]; } @@ -31,7 +31,7 @@ _Note: each language passed to `langs` must be an attribute name in `pkgs.hunspe The `ibus-engines.typing-booster` package contains a program named `emoji-picker`. To display all emojis correctly, a special font such as `noto-fonts-emoji` is needed: -On NixOS it can be installed using the following expression: +On NixOS, it can be installed using the following expression: ```nix { pkgs, ... }: { fonts.fonts = with pkgs; [ noto-fonts-emoji ]; } diff --git a/doc/builders/packages/linux.section.md b/doc/builders/packages/linux.section.md index f669c720710c..b64da85791a0 100644 --- a/doc/builders/packages/linux.section.md +++ b/doc/builders/packages/linux.section.md @@ -4,7 +4,7 @@ The Nix expressions to build the Linux kernel are in [`pkgs/os-specific/linux/ke The function that builds the kernel has an argument `kernelPatches` which should be a list of `{name, patch, extraConfig}` attribute sets, where `name` is the name of the patch (which is included in the kernel’s `meta.description` attribute), `patch` is the patch itself (possibly compressed), and `extraConfig` (optional) is a string specifying extra options to be concatenated to the kernel configuration file (`.config`). -The kernel derivation exports an attribute `features` specifying whether optional functionality is or isn’t enabled. This is used in NixOS to implement kernel-specific behaviour. For instance, if the kernel has the `iwlwifi` feature (i.e. has built-in support for Intel wireless chipsets), then NixOS doesn’t have to build the external `iwlwifi` package: +The kernel derivation exports an attribute `features` specifying whether optional functionality is or isn’t enabled. This is used in NixOS to implement kernel-specific behaviour. For instance, if the kernel has the `iwlwifi` feature (i.e., has built-in support for Intel wireless chipsets), then NixOS doesn’t have to build the external `iwlwifi` package: ```nix modulesTree = [kernel] @@ -14,19 +14,19 @@ modulesTree = [kernel] How to add a new (major) version of the Linux kernel to Nixpkgs: -1. Copy the old Nix expression (e.g. `linux-2.6.21.nix`) to the new one (e.g. 
`linux-2.6.22.nix`) and update it. +1. Copy the old Nix expression (e.g., `linux-2.6.21.nix`) to the new one (e.g., `linux-2.6.22.nix`) and update it. 2. Add the new kernel to the `kernels` attribute set in `linux-kernels.nix` (e.g., create an attribute `kernel_2_6_22`). 3. Now we’re going to update the kernel configuration. First unpack the kernel. Then for each supported platform (`i686`, `x86_64`, `uml`) do the following: - 1. Make an copy from the old config (e.g. `config-2.6.21-i686-smp`) to the new one (e.g. `config-2.6.22-i686-smp`). + 1. Make a copy from the old config (e.g., `config-2.6.21-i686-smp`) to the new one (e.g., `config-2.6.22-i686-smp`). - 2. Copy the config file for this platform (e.g. `config-2.6.22-i686-smp`) to `.config` in the kernel source tree. + 2. Copy the config file for this platform (e.g., `config-2.6.22-i686-smp`) to `.config` in the kernel source tree. - 3. Run `make oldconfig ARCH={i386,x86_64,um}` and answer all questions. (For the uml configuration, also add `SHELL=bash`.) Make sure to keep the configuration consistent between platforms (i.e. don’t enable some feature on `i686` and disable it on `x86_64`). + 3. Run `make oldconfig ARCH={i386,x86_64,um}` and answer all questions. (For the uml configuration, also add `SHELL=bash`.) Make sure to keep the configuration consistent between platforms (i.e., don’t enable some feature on `i686` and disable it on `x86_64`). - 4. If needed you can also run `make menuconfig`: + 4. If needed, you can also run `make menuconfig`: ```ShellSession $ nix-env -f "<nixpkgs>" -iA ncurses @@ -34,7 +34,7 @@ How to add a new (major) version of the Linux kernel to Nixpkgs: $ make menuconfig ARCH=arch ``` - 5. Copy `.config` over the new config file (e.g. `config-2.6.22-i686-smp`). + 5. Copy `.config` over the new config file (e.g., `config-2.6.22-i686-smp`). 4. Test building the kernel: `nix-build -A linuxKernel.kernels.kernel_2_6_22`. If it compiles, ship it! For extra credit, try booting NixOS with it. diff --git a/doc/builders/packages/locales.section.md b/doc/builders/packages/locales.section.md index e5a037004818..3a983f13a396 100644 --- a/doc/builders/packages/locales.section.md +++ b/doc/builders/packages/locales.section.md @@ -1,5 +1,5 @@ # Locales {#locales} -To allow simultaneous use of packages linked against different versions of `glibc` with different locale archive formats Nixpkgs patches `glibc` to rely on `LOCALE_ARCHIVE` environment variable. +To allow simultaneous use of packages linked against different versions of `glibc` with different locale archive formats, Nixpkgs patches `glibc` to rely on the `LOCALE_ARCHIVE` environment variable. -On non-NixOS distributions this variable is obviously not set. This can cause regressions in language support or even crashes in some Nixpkgs-provided programs. The simplest way to mitigate this problem is exporting the `LOCALE_ARCHIVE` variable pointing to `${glibcLocales}/lib/locale/locale-archive`. +On non-NixOS distributions, this variable is obviously not set. This can cause regressions in language support or even crashes in some Nixpkgs-provided programs. The simplest way to mitigate this problem is exporting the `LOCALE_ARCHIVE` variable pointing to `${glibcLocales}/lib/locale/locale-archive`. 
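As an illustrative sketch (not taken from this patch; it assumes a standard `<nixpkgs>` channel), a development shell that exports the variable could look like this:

```nix
# Hypothetical example: a nix-shell environment that exposes the full glibc
# locale archive so that glibc-linked programs can find their locales.
with import <nixpkgs> {};
mkShell {
  LOCALE_ARCHIVE = "${glibcLocales}/lib/locale/locale-archive";
}
```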
The drawback (and the reason this is not the default) is the relatively large (a hundred MiB) size of the full set of locales. It is possible to build a custom set of locales by overriding parameters `allLocales` and `locales` of the package. diff --git a/doc/builders/packages/nginx.section.md b/doc/builders/packages/nginx.section.md index 154c21f9b369..0704b534e5f7 100644 --- a/doc/builders/packages/nginx.section.md +++ b/doc/builders/packages/nginx.section.md @@ -4,8 +4,8 @@ ## ETags on static files served from the Nix store {#sec-nginx-etag} -HTTP has a couple different mechanisms for caching to prevent clients from having to download the same content repeatedly if a resource has not changed since the last time it was requested. When nginx is used as a server for static files, it implements the caching mechanism based on the [`Last-Modified`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Last-Modified) response header automatically; unfortunately, it works by using filesystem timestamps to determine the value of the `Last-Modified` header. This doesn't give the desired behavior when the file is in the Nix store, because all file timestamps are set to 0 (for reasons related to build reproducibility). +HTTP has a couple of different mechanisms for caching to prevent clients from having to download the same content repeatedly if a resource has not changed since the last time it was requested. When nginx is used as a server for static files, it implements the caching mechanism based on the [`Last-Modified`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Last-Modified) response header automatically; unfortunately, it works by using filesystem timestamps to determine the value of the `Last-Modified` header. This doesn't give the desired behavior when the file is in the Nix store because all file timestamps are set to 0 (for reasons related to build reproducibility). -Fortunately, HTTP supports an alternative (and more effective) caching mechanism: the [`ETag`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag) response header. The value of the `ETag` header specifies some identifier for the particular content that the server is sending (e.g. a hash). When a client makes a second request for the same resource, it sends that value back in an `If-None-Match` header. If the ETag value is unchanged, then the server does not need to resend the content. +Fortunately, HTTP supports an alternative (and more effective) caching mechanism: the [`ETag`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag) response header. The value of the `ETag` header specifies some identifier for the particular content that the server is sending (e.g., a hash). When a client makes a second request for the same resource, it sends that value back in an `If-None-Match` header. If the ETag value is unchanged, then the server does not need to resend the content. As of NixOS 19.09, the nginx package in Nixpkgs is patched such that when nginx serves a file out of `/nix/store`, the hash in the store path is used as the `ETag` header in the HTTP response, thus providing proper caching functionality. This happens automatically; you do not need to do modify any configuration to get this behavior. 
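As an illustrative sketch (not part of this patch; the host name and page content are made up), a NixOS virtual host serving files straight from the Nix store, which therefore gets the store-hash-based `ETag` without any extra configuration, could look like this:

```nix
# Hypothetical example: an nginx virtual host whose document root is a store
# path; the patched nginx derives the ETag from the /nix/store hash automatically.
{ pkgs, ... }: {
  services.nginx = {
    enable = true;
    virtualHosts."example.org".root =
      pkgs.writeTextDir "index.html" "<h1>hello from the Nix store</h1>";
  };
}
```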
diff --git a/doc/builders/packages/opengl.section.md b/doc/builders/packages/opengl.section.md index ee7f3af98cfc..f4d282267a07 100644 --- a/doc/builders/packages/opengl.section.md +++ b/doc/builders/packages/opengl.section.md @@ -12,4 +12,4 @@ The NixOS desktop or other non-headless configurations are the primary target fo If you are using a non-NixOS GNU/Linux/X11 desktop with free software video drivers, consider launching OpenGL-dependent programs from Nixpkgs with Nixpkgs versions of `libglvnd` and `mesa.drivers` in `LD_LIBRARY_PATH`. For Mesa drivers, the Linux kernel version doesn't have to match nixpkgs. -For proprietary video drivers you might have luck with also adding the corresponding video driver package. +For proprietary video drivers, you might have luck with also adding the corresponding video driver package. diff --git a/doc/builders/packages/shell-helpers.section.md b/doc/builders/packages/shell-helpers.section.md index 57b8619c5007..e7c2b0abebfc 100644 --- a/doc/builders/packages/shell-helpers.section.md +++ b/doc/builders/packages/shell-helpers.section.md @@ -4,7 +4,7 @@ Some packages provide the shell integration to be more useful. But unlike other - `fzf` : `fzf-share` -E.g. `fzf` can then used in the `.bashrc` like this: +E.g. `fzf` can then be used in the `.bashrc` like this: ```bash source "$(fzf-share)/completion.bash" diff --git a/doc/builders/packages/steam.section.md b/doc/builders/packages/steam.section.md index 3ce33c9b60ef..25728aa52aef 100644 --- a/doc/builders/packages/steam.section.md +++ b/doc/builders/packages/steam.section.md @@ -2,20 +2,20 @@ ## Steam in Nix {#sec-steam-nix} -Steam is distributed as a `.deb` file, for now only as an i686 package (the amd64 package only has documentation). When unpacked, it has a script called `steam` that in Ubuntu (their target distro) would go to `/usr/bin`. When run for the first time, this script copies some files to the user's home, which include another script that is the ultimate responsible for launching the steam binary, which is also in \$HOME. +Steam is distributed as a `.deb` file, for now only as an i686 package (the amd64 package only has documentation). When unpacked, it has a script called `steam` that in Ubuntu (their target distro) would go to `/usr/bin`. When run for the first time, this script copies some files to the user's home, which include another script that is ultimately responsible for launching the steam binary, which is also in `$HOME`. Nix problems and constraints: -- We don't have `/bin/bash` and many scripts point there. Similarly for `/usr/bin/python`. +- We don't have `/bin/bash` and many scripts point there. Same thing for `/usr/bin/python`. - We don't have the dynamic loader in `/lib`. -The `steam.sh` script in \$HOME can not be patched, as it is checked and rewritten by steam. +The `steam.sh` script in `$HOME` cannot be patched, as it is checked and rewritten by steam. - The steam binary cannot be patched, it's also checked. The current approach to deploy Steam in NixOS is composing a FHS-compatible chroot environment, as documented [here](http://sandervanderburg.blogspot.nl/2013/09/composing-fhs-compatible-chroot.html). This allows us to have binaries in the expected paths without disrupting the system, and to avoid patching them to work in a non FHS environment. 
## How to play {#sec-steam-play} -Use `programs.steam.enable = true;` if you want to add steam to systemPackages and also enable a few workarrounds aswell as Steam controller support or other Steam supported controllers such as the DualShock 4 or Nintendo Switch Pr. +Use `programs.steam.enable = true;` if you want to add steam to `systemPackages` and also enable a few workarounds as well as Steam controller support or other Steam supported controllers such as the DualShock 4 or Nintendo Switch Pro Controller. ## Troubleshooting {#sec-steam-troub} @@ -32,7 +32,7 @@ Use `programs.steam.enable = true;` if you want to add steam to systemPackages a - **Using the FOSS Radeon or nouveau (nvidia) drivers** - The `newStdcpp` parameter was removed since NixOS 17.09 and should not be needed anymore. - - Steam ships statically linked with a version of libcrypto that conflics with the one dynamically loaded by radeonsi_dri.so. If you get the error + - Steam ships statically linked with a version of `libcrypto` that conflicts with the one dynamically loaded by radeonsi_dri.so. If you get the error: ``` steam.sh: line 713: 7842 Segmentation fault (core dumped) @@ -42,13 +42,13 @@ Use `programs.steam.enable = true;` if you want to add steam to systemPackages a - **Java** - 1. There is no java in steam chrootenv by default. If you get a message like + 1. There is no java in steam chrootenv by default. If you get a message like: ``` /home/foo/.local/share/Steam/SteamApps/common/towns/towns.sh: line 1: java: command not found ``` - you need to add + you need to add: ```nix steam.override { withJava = true; }; @@ -56,7 +56,7 @@ Use `programs.steam.enable = true;` if you want to add steam to systemPackages a ## steam-run {#sec-steam-run} -The FHS-compatible chroot used for Steam can also be used to run other Linux games that expect a FHS environment. To use it, install the `steam-run` package and run the game with +The FHS-compatible chroot used for Steam can also be used to run other Linux games that expect a FHS environment. To use it, install the `steam-run` package and run the game with: ``` steam-run ./foo diff --git a/doc/builders/packages/urxvt.section.md b/doc/builders/packages/urxvt.section.md index 2d1196d92278..507feaa6fd86 100644 --- a/doc/builders/packages/urxvt.section.md +++ b/doc/builders/packages/urxvt.section.md @@ -4,7 +4,7 @@ Urxvt, also known as rxvt-unicode, is a highly customizable terminal emulator. ## Configuring urxvt {#sec-urxvt-conf} -In `nixpkgs`, urxvt is provided by the package `rxvt-unicode`. It can be configured to include your choice of plugins, reducing its closure size from the default configuration which includes all available plugins. To make use of this functionality, use an overlay or directly install an expression that overrides its configuration, such as +In `nixpkgs`, urxvt is provided by the package `rxvt-unicode`. It can be configured to include your choice of plugins, reducing its closure size from the default configuration which includes all available plugins. To make use of this functionality, use an overlay or directly install an expression that overrides its configuration, such as: ```nix rxvt-unicode.override { @@ -58,14 +58,14 @@ rxvt-unicode.override { ## Packaging urxvt plugins {#sec-urxvt-pkg} -Urxvt plugins resides in `pkgs/applications/misc/rxvt-unicode-plugins`. To add a new plugin create an expression in a subdirectory and add the package to the set in `pkgs/applications/misc/rxvt-unicode-plugins/default.nix`. 
+Urxvt plugins reside in `pkgs/applications/misc/rxvt-unicode-plugins`. To add a new plugin, create an expression in a subdirectory and add the package to the set in `pkgs/applications/misc/rxvt-unicode-plugins/default.nix`. A plugin can be any kind of derivation, the only requirement is that it should always install perl scripts in `$out/lib/urxvt/perl`. Look for existing plugins for examples. -If the plugin is itself a perl package that needs to be imported from other plugins or scripts, add the following passthrough: +If the plugin is itself a Perl package that needs to be imported from other plugins or scripts, add the following passthrough: ```nix passthru.perlPackages = [ "self" ]; ``` -This will make the urxvt wrapper pick up the dependency and set up the perl path accordingly. +This will make the urxvt wrapper pick up the dependency and set up the Perl path accordingly. diff --git a/doc/builders/packages/weechat.section.md b/doc/builders/packages/weechat.section.md index e4e956b908ed..767cc604ab45 100644 --- a/doc/builders/packages/weechat.section.md +++ b/doc/builders/packages/weechat.section.md @@ -1,6 +1,6 @@ -# Weechat {#sec-weechat} +# WeeChat {#sec-weechat} -Weechat can be configured to include your choice of plugins, reducing its closure size from the default configuration which includes all available plugins. To make use of this functionality, install an expression that overrides its configuration such as +WeeChat can be configured to include your choice of plugins, reducing its closure size from the default configuration which includes all available plugins. To make use of this functionality, install an expression that overrides its configuration, such as: ```nix weechat.override {configure = {availablePlugins, ...}: { @@ -13,7 +13,7 @@ If the `configure` function returns an attrset without the `plugins` attribute, The plugins currently available are `python`, `perl`, `ruby`, `guile`, `tcl` and `lua`. -The python and perl plugins allows the addition of extra libraries. For instance, the `inotify.py` script in `weechat-scripts` requires D-Bus or libnotify, and the `fish.py` script requires `pycrypto`. To use these scripts, use the plugin's `withPackages` attribute: +The Python and Perl plugins allow the addition of extra libraries. For instance, the `inotify.py` script in `weechat-scripts` requires D-Bus or libnotify, and the `fish.py` script requires `pycrypto`. To use these scripts, use the plugin's `withPackages` attribute: ```nix weechat.override { configure = {availablePlugins, ...}: { @@ -30,7 +30,7 @@ weechat.override { Further values can be added to the list of commands when running `weechat --run-command "your-commands"`. -Additionally it's possible to specify scripts to be loaded when starting `weechat`. These will be loaded before the commands from `init`: +Additionally, it's possible to specify scripts to be loaded when starting `weechat`. These will be loaded before the commands from `init`: ```nix weechat.override { @@ -64,7 +64,7 @@ weechat.override { } ``` -In `nixpkgs` there's a subpackage which contains derivations for WeeChat scripts. Such derivations expect a `passthru.scripts` attribute which contains a list of all scripts inside the store path. Furthermore all scripts have to live in `$out/share`. An exemplary derivation looks like this: +In `nixpkgs` there's a subpackage which contains derivations for WeeChat scripts. Such derivations expect a `passthru.scripts` attribute, which contains a list of all scripts inside the store path. 
Furthermore, all scripts have to live in `$out/share`. An exemplary derivation looks like this: ```nix { stdenv, fetchurl }: diff --git a/doc/builders/trivial-builders.chapter.md b/doc/builders/trivial-builders.chapter.md index 779a0a801b4e..c05511785bf5 100644 --- a/doc/builders/trivial-builders.chapter.md +++ b/doc/builders/trivial-builders.chapter.md @@ -35,10 +35,10 @@ This works just like `runCommand`. The only difference is that it also provides ## `runCommandLocal` {#trivial-builder-runCommandLocal} -Variant of `runCommand` that forces the derivation to be built locally, it is not substituted. This is intended for very cheap commands (<1s execution time). It saves on the network roundrip and can speed up a build. +Variant of `runCommand` that forces the derivation to be built locally; it is not substituted. This is intended for very cheap commands (<1s execution time). It saves on the network round-trip and can speed up a build. ::: {.note} -This sets [`allowSubstitutes` to `false`](https://nixos.org/nix/manual/#adv-attr-allowSubstitutes), so only use `runCommandLocal` if you are certain the user will always have a builder for the `system` of the derivation. This should be true for most trivial use cases (e.g. just copying some files to a different location or adding symlinks), because there the `system` is usually the same as `builtins.currentSystem`. +This sets [`allowSubstitutes` to `false`](https://nixos.org/nix/manual/#adv-attr-allowSubstitutes), so only use `runCommandLocal` if you are certain the user will always have a builder for the `system` of the derivation. This should be true for most trivial use cases (e.g., just copying some files to a different location or adding symlinks) because there the `system` is usually the same as `builtins.currentSystem`. ::: ## `writeTextFile`, `writeText`, `writeTextDir`, `writeScript`, `writeScriptBin` {#trivial-builder-writeText} @@ -219,5 +219,5 @@ produces an output path `/nix/store/<hash>-runtime-references` containing /nix/store/<hash>-hello-2.10 ``` -but none of `hello`'s dependencies, because those are not referenced directly +but none of `hello`'s dependencies because those are not referenced directly by `hi`'s output. diff --git a/maintainers/scripts/luarocks-packages.csv b/maintainers/scripts/luarocks-packages.csv index c8c8fb233d59..312aa0dad922 100644 --- a/maintainers/scripts/luarocks-packages.csv +++ b/maintainers/scripts/luarocks-packages.csv @@ -64,6 +64,7 @@ luasocket,,,,,, luasql-sqlite3,,,,,,vyp luassert,,,,,, luasystem,,,,,, +luaunbound,,,,, luautf8,,,,,,pstn luazip,,,,,, lua-yajl,,,,,,pstn diff --git a/nixos/doc/manual/from_md/release-notes/rl-2205.section.xml b/nixos/doc/manual/from_md/release-notes/rl-2205.section.xml index 10608685c471..1e3f269dafb2 100644 --- a/nixos/doc/manual/from_md/release-notes/rl-2205.section.xml +++ b/nixos/doc/manual/from_md/release-notes/rl-2205.section.xml @@ -509,6 +509,19 @@ /etc/containers. + + + For new installations + virtualisation.oci-containers.backend is + now set to podman by default. If you still + want to use Docker on systems where + system.stateVersion is set to + "22.05", set + virtualisation.oci-containers.backend = "docker";. Old + systems with older stateVersions stay with + docker. + + security.klogd was removed. 
Logging of diff --git a/nixos/doc/manual/release-notes/rl-2205.section.md b/nixos/doc/manual/release-notes/rl-2205.section.md index 3b118d4e03d2..dcfabf01ff3d 100644 --- a/nixos/doc/manual/release-notes/rl-2205.section.md +++ b/nixos/doc/manual/release-notes/rl-2205.section.md @@ -164,6 +164,9 @@ In addition to numerous new and upgraded packages, this release has the followin This is to improve compatibility with `libcontainer` based software such as Podman and Skopeo which assumes they have ownership over `/etc/containers`. +- For new installations `virtualisation.oci-containers.backend` is now set to `podman` by default. + If you still want to use Docker on systems where `system.stateVersion` is set to to `"22.05"` set `virtualisation.oci-containers.backend = "docker";`.Old systems with older `stateVersion`s stay with "docker". + - `security.klogd` was removed. Logging of kernel messages is handled by systemd since Linux 3.5. diff --git a/nixos/modules/i18n/input-method/fcitx5.nix b/nixos/modules/i18n/input-method/fcitx5.nix index 6fea28e22345..b4b887606e95 100644 --- a/nixos/modules/i18n/input-method/fcitx5.nix +++ b/nixos/modules/i18n/input-method/fcitx5.nix @@ -5,7 +5,9 @@ with lib; let im = config.i18n.inputMethod; cfg = im.fcitx5; - fcitx5Package = pkgs.fcitx5-with-addons.override { inherit (cfg) addons; }; + addons = cfg.addons ++ optional cfg.enableRimeData pkgs.rime-data; + fcitx5Package = pkgs.fcitx5-with-addons.override { inherit addons; }; + whetherRimeDataDir = any (p: p.pname == "fcitx5-rime") cfg.addons; in { options = { i18n.inputMethod.fcitx5 = { @@ -17,16 +19,29 @@ in { Enabled Fcitx5 addons. ''; }; + + enableRimeData = mkEnableOption "default rime-data with fcitx5-rime"; }; }; config = mkIf (im.enabled == "fcitx5") { i18n.inputMethod.package = fcitx5Package; - environment.variables = { - GTK_IM_MODULE = "fcitx"; - QT_IM_MODULE = "fcitx"; - XMODIFIERS = "@im=fcitx"; - }; + environment = mkMerge [{ + variables = { + GTK_IM_MODULE = "fcitx"; + QT_IM_MODULE = "fcitx"; + XMODIFIERS = "@im=fcitx"; + }; + } + (mkIf whetherRimeDataDir { + pathsToLink = [ + "/share/rime-data" + ]; + + variables = { + NIX_RIME_DATA_DIR = "/run/current-system/sw/share/rime-data"; + }; + })]; }; } diff --git a/nixos/modules/services/networking/prosody.nix b/nixos/modules/services/networking/prosody.nix index 42596ccfefd9..7920e4b26345 100644 --- a/nixos/modules/services/networking/prosody.nix +++ b/nixos/modules/services/networking/prosody.nix @@ -820,6 +820,7 @@ in '') cfg.muc} ${ lib.optionalString (cfg.uploadHttp != null) '' + -- TODO: think about migrating this to mod-http_file_share instead. 
Component ${toLua cfg.uploadHttp.domain} "http_upload" http_upload_file_size_limit = ${cfg.uploadHttp.uploadFileSizeLimit} http_upload_expire_after = ${cfg.uploadHttp.uploadExpireAfter} diff --git a/nixos/modules/system/boot/systemd/shutdown.nix b/nixos/modules/system/boot/systemd/shutdown.nix index 63e1751f9b41..ca4cdf827d95 100644 --- a/nixos/modules/system/boot/systemd/shutdown.nix +++ b/nixos/modules/system/boot/systemd/shutdown.nix @@ -44,7 +44,7 @@ in { ]; }; - path = [pkgs.util-linux pkgs.makeInitrdNGTool pkgs.glibc pkgs.patchelf]; + path = [pkgs.util-linux pkgs.makeInitrdNGTool]; serviceConfig.Type = "oneshot"; script = '' mkdir -p /run/initramfs diff --git a/nixos/modules/virtualisation/oci-containers.nix b/nixos/modules/virtualisation/oci-containers.nix index f40481727830..fa5fe9973044 100644 --- a/nixos/modules/virtualisation/oci-containers.nix +++ b/nixos/modules/virtualisation/oci-containers.nix @@ -338,11 +338,7 @@ in { backend = mkOption { type = types.enum [ "podman" "docker" ]; - default = - # TODO: Once https://github.com/NixOS/nixpkgs/issues/77925 is resolved default to podman - # if versionAtLeast config.system.stateVersion "20.09" then "podman" - # else "docker"; - "docker"; + default = if versionAtLeast config.system.stateVersion "22.05" then "podman" else "docker"; description = "The underlying Docker implementation to use."; }; diff --git a/nixos/tests/systemd-initrd-simple.nix b/nixos/tests/systemd-initrd-simple.nix index 959cc87c0f26..5d98114304b7 100644 --- a/nixos/tests/systemd-initrd-simple.nix +++ b/nixos/tests/systemd-initrd-simple.nix @@ -1,7 +1,7 @@ import ./make-test-python.nix ({ lib, pkgs, ... }: { name = "systemd-initrd-simple"; - machine = { pkgs, ... }: { + nodes.machine = { pkgs, ... }: { boot.initrd.systemd = { enable = true; emergencyAccess = true; diff --git a/nixos/tests/xmpp/xmpp-sendmessage.nix b/nixos/tests/xmpp/xmpp-sendmessage.nix index 47a77f524c6a..80dfcff2d0eb 100644 --- a/nixos/tests/xmpp/xmpp-sendmessage.nix +++ b/nixos/tests/xmpp/xmpp-sendmessage.nix @@ -51,11 +51,8 @@ class CthonTest(ClientXMPP): log.info('Message sent') # Test http upload (XEP_0363) - def timeout_callback(arg): - log.error("ERROR: Cannot upload file. XEP_0363 seems broken") - sys.exit(1) try: - url = await self['xep_0363'].upload_file("${dummyFile}",timeout=10, timeout_callback=timeout_callback) + url = await self['xep_0363'].upload_file("${dummyFile}",timeout=10) except: log.error("ERROR: Cannot run upload command. 
XEP_0363 seems broken") sys.exit(1) diff --git a/pkgs/applications/networking/gmailctl/default.nix b/pkgs/applications/networking/gmailctl/default.nix index 9fc1e25a92d7..e1ce1914db92 100644 --- a/pkgs/applications/networking/gmailctl/default.nix +++ b/pkgs/applications/networking/gmailctl/default.nix @@ -6,18 +6,16 @@ buildGoModule rec { pname = "gmailctl"; - # on an unstable version because of https://github.com/mbrt/gmailctl/issues/232 - # and https://github.com/mbrt/gmailctl/commit/484bb689866987580e0576165180ef06375a543f - version = "unstable-2022-03-24"; + version = "0.10.2"; src = fetchFromGitHub { owner = "mbrt"; repo = "gmailctl"; - rev = "484bb689866987580e0576165180ef06375a543f"; - sha256 = "sha256-hIoS64QEDJ1qq3KJ2H8HjgQl8SxuIo+xz7Ot8CdjjQA="; + rev = "v${version}"; + sha256 = "sha256-tj+jKJuKwuqic/qfaUbf+Tao1X2FW0VVoGwqyx3q+go="; }; - vendorSha256 = "sha256-KWM20a38jZ3/a45313kxY2LaCQyiNMEdfdIV78phrBo="; + vendorSha256 = "sha256-aBw9C488a3Wxde3QCCU0eiagiRYOS9mkjcCsB2Mrdr0="; nativeBuildInputs = [ installShellFiles diff --git a/pkgs/applications/networking/instant-messengers/teams/default.nix b/pkgs/applications/networking/instant-messengers/teams/default.nix index 54eea2f1a245..684fb8454d67 100644 --- a/pkgs/applications/networking/instant-messengers/teams/default.nix +++ b/pkgs/applications/networking/instant-messengers/teams/default.nix @@ -134,7 +134,7 @@ let src = fetchurl { url = "https://statics.teams.cdn.office.net/production-osx/${version}/Teams_osx.pkg"; - sha256 = "1mg6a3b3954w4xy5rlcrwxczymygl61dv2rxqp45sjcsh3hp39q0"; + hash = "sha256-vLUEvOSBUyAJIWHOAIkTqTW/W6TkgmeyRzQbquZP810="; }; buildInputs = [ xar cpio makeWrapper ]; diff --git a/pkgs/applications/video/vdr/plugins.nix b/pkgs/applications/video/vdr/plugins.nix index f7eb5f201e85..e8675263720c 100644 --- a/pkgs/applications/video/vdr/plugins.nix +++ b/pkgs/applications/video/vdr/plugins.nix @@ -238,94 +238,19 @@ in { }; }; - fritzbox = let - libconvpp = stdenv.mkDerivation { - name = "jowi24-libconv++-20130216"; - propagatedBuildInputs = [ libiconv ]; - CXXFLAGS = "-std=gnu++11 -Os"; - src = fetchFromGitHub { - owner = "jowi24"; - repo = "libconvpp"; - rev = "90769b2216bc66c5ea5e41a929236c20d367c63b"; - sha256 = "0bf0dwxrzd42l84p8nxcsjdk1gvzlhad93nsbn97z6kr61n4cr33"; - }; - installPhase = '' - mkdir -p $out/lib $out/include/libconv++ - cp source.a $out/lib/libconv++.a - cp *.h $out/include/libconv++ - ''; - }; - - liblogpp = stdenv.mkDerivation { - name = "jowi24-liblogpp-20130216"; - CXXFLAGS = "-std=gnu++11 -Os"; - src = fetchFromGitHub { - owner = "jowi24"; - repo = "liblogpp"; - rev = "eee4046d2ae440974bcc8ceec00b069f0a2c62b9"; - sha256 = "01aqvwmwh5kk3mncqpim8llwha9gj5qq0c4cvqfn4h8wqi3d9l3p"; - }; - installPhase = '' - mkdir -p $out/lib $out/include/liblog++ - cp source.a $out/lib/liblog++.a - cp *.h $out/include/liblog++ - ''; - }; - - libnetpp = stdenv.mkDerivation { - name = "jowi24-libnet++-20180628"; - CXXFLAGS = "-std=gnu++11 -Os"; - src = fetchFromGitHub { - owner = "jowi24"; - repo = "libnetpp"; - rev = "212847f0efaeffee8422059b8e202d844174aaf3"; - sha256 = "0vjl6ld6aj25rzxm26yjv3h2gy7gp7qnbinpw6sf1shg2xim9x0b"; - }; - installPhase = '' - mkdir -p $out/lib $out/include/libnet++ - cp source.a $out/lib/libnet++.a - cp *.h $out/include/libnet++ - ''; - buildInputs = [ boost liblogpp libconvpp ]; - }; - - libfritzpp = stdenv.mkDerivation { - name = "jowi24-libfritzpp-20131201"; - CXXFLAGS = "-std=gnu++11 -Os"; - src = fetchFromGitHub { - owner = "jowi24"; - repo = "libfritzpp"; - rev = 
"ca19013c9451cbac7a90155b486ea9959ced0f67"; - sha256 = "0jk93zm3qzl9z96gfs6xl1c8ip8lckgbzibf7jay7dbgkg9kyjfg"; - }; - installPhase = '' - mkdir -p $out/lib $out/include/libfritz++ - cp source.a $out/lib/libfritz++.a - cp *.h $out/include/libfritz++ - ''; - propagatedBuildInputs = [ libgcrypt ]; - buildInputs = [ boost liblogpp libconvpp libnetpp ]; - }; - - in stdenv.mkDerivation rec { + fritzbox = stdenv.mkDerivation rec { pname = "vdr-fritzbox"; - version = "1.5.3"; + version = "1.5.4"; src = fetchFromGitHub { owner = "jowi24"; repo = "vdr-fritz"; rev = version; - sha256 = "0wab1kyma9jzhm6j33cv9hd2a5d1334ghgdi2051nmr1bdcfcsw8"; + sha256 = "sha256-DGD73i+ZHFgtCo+pMj5JaMovvb5vS1x20hmc5t29//o="; + fetchSubmodules = true; }; - postUnpack = '' - cp ${libfritzpp}/lib/* $sourceRoot/libfritz++ - cp ${liblogpp}/lib/* $sourceRoot/liblog++ - cp ${libnetpp}/lib/* $sourceRoot/libnet++ - cp ${libconvpp}/lib/* $sourceRoot/libconv++ - ''; - - buildInputs = [ vdr boost libconvpp libfritzpp libnetpp liblogpp ]; + buildInputs = [ vdr boost libgcrypt ]; installFlags = [ "DESTDIR=$(out)" ]; diff --git a/pkgs/build-support/kernel/make-initrd-ng-tool.nix b/pkgs/build-support/kernel/make-initrd-ng-tool.nix index 66ffc09d43cf..654b10367812 100644 --- a/pkgs/build-support/kernel/make-initrd-ng-tool.nix +++ b/pkgs/build-support/kernel/make-initrd-ng-tool.nix @@ -1,4 +1,4 @@ -{ rustPlatform }: +{ rustPlatform, lib, makeWrapper, patchelf, glibc, binutils }: rustPlatform.buildRustPackage { pname = "make-initrd-ng"; @@ -6,4 +6,11 @@ rustPlatform.buildRustPackage { src = ./make-initrd-ng; cargoLock.lockFile = ./make-initrd-ng/Cargo.lock; + + nativeBuildInputs = [ makeWrapper ]; + + postInstall = '' + wrapProgram $out/bin/make-initrd-ng \ + --prefix PATH : ${lib.makeBinPath [ patchelf glibc binutils ]} + ''; } diff --git a/pkgs/build-support/kernel/make-initrd-ng.nix b/pkgs/build-support/kernel/make-initrd-ng.nix index 1890bbcd173a..5f0a70f8a969 100644 --- a/pkgs/build-support/kernel/make-initrd-ng.nix +++ b/pkgs/build-support/kernel/make-initrd-ng.nix @@ -8,7 +8,7 @@ let # compression type and filename extension. compressorName = fullCommand: builtins.elemAt (builtins.match "([^ ]*/)?([^ ]+).*" fullCommand) 1; in -{ stdenvNoCC, perl, cpio, ubootTools, lib, pkgsBuildHost, makeInitrdNGTool, patchelf, runCommand, glibc +{ stdenvNoCC, perl, cpio, ubootTools, lib, pkgsBuildHost, makeInitrdNGTool, patchelf, runCommand # Name of the derivation (not of the resulting file!) , name ? "initrd" @@ -72,7 +72,7 @@ in passAsFile = ["contents"]; contents = lib.concatMapStringsSep "\n" ({ object, symlink, ... 
}: "${object}\n${if symlink == null then "" else symlink}") contents + "\n"; - nativeBuildInputs = [makeInitrdNGTool patchelf glibc cpio] ++ lib.optional makeUInitrd ubootTools; + nativeBuildInputs = [makeInitrdNGTool patchelf cpio] ++ lib.optional makeUInitrd ubootTools; } '' mkdir ./root make-initrd-ng "$contentsPath" ./root diff --git a/pkgs/build-support/kernel/make-initrd-ng/src/main.rs b/pkgs/build-support/kernel/make-initrd-ng/src/main.rs index 1342734590f7..294c570a3741 100644 --- a/pkgs/build-support/kernel/make-initrd-ng/src/main.rs +++ b/pkgs/build-support/kernel/make-initrd-ng/src/main.rs @@ -6,7 +6,7 @@ use std::hash::Hash; use std::io::{BufReader, BufRead, Error, ErrorKind}; use std::os::unix; use std::path::{Component, Path, PathBuf}; -use std::process::{Command, Stdio}; +use std::process::Command; struct NonRepeatingQueue { queue: VecDeque, @@ -42,7 +42,6 @@ fn patch_elf, P: AsRef>(mode: S, path: P) -> Result, P: AsRef>(mode: S, path: P) -> Result + AsRef, S: AsRef>( +fn copy_file + AsRef, S: AsRef + AsRef>( source: P, target: S, queue: &mut NonRepeatingQueue>, ) -> Result<(), Error> { - fs::copy(&source, target)?; + fs::copy(&source, &target)?; if !Command::new("ldd").arg(&source).output()?.status.success() { - //stdout(Stdio::inherit()).stderr(Stdio::inherit()). - println!("{:?} is not dynamically linked. Not recursing.", OsStr::new(&source)); + // Not dynamically linked - no need to recurse return Ok(()); } @@ -91,6 +89,17 @@ fn copy_file + AsRef, S: AsRef>( } } + // Make file writable to strip it + let mut permissions = fs::metadata(&target)?.permissions(); + permissions.set_readonly(false); + fs::set_permissions(&target, permissions)?; + + // Strip further than normal + if !Command::new("strip").arg("--strip-all").arg(OsStr::new(&target)).output()?.status.success() { + println!("{:?} was not successfully stripped.", OsStr::new(&target)); + } + + Ok(()) } @@ -200,7 +209,6 @@ fn main() -> Result<(), Error> { } } while let Some(obj) = queue.pop_front() { - println!("{:?}", obj); handle_path(out_path, &*obj, &mut queue)?; } diff --git a/pkgs/development/compilers/solc/default.nix b/pkgs/development/compilers/solc/default.nix index 9ad3cf77dc4d..6594872a258f 100644 --- a/pkgs/development/compilers/solc/default.nix +++ b/pkgs/development/compilers/solc/default.nix @@ -1,4 +1,5 @@ { lib, gccStdenv, fetchzip +, pkgs , boost , cmake , coreutils @@ -41,9 +42,17 @@ let sha256 = "1mnvxqsan034d2jiqnw2yvkljl7lwvhakmj5bscwp1fpkn655bbw"; }; - solc = gccStdenv.mkDerivation rec { - pname = "solc"; - version = "0.8.13"; + pname = "solc"; + version = "0.8.13"; + meta = with lib; { + description = "Compiler for Ethereum smart contract language Solidity"; + homepage = "https://github.com/ethereum/solidity"; + license = licenses.gpl3; + maintainers = with maintainers; [ dbrock akru lionello sifmelcara ]; + }; + + solc = if gccStdenv.isLinux then gccStdenv.mkDerivation rec { + inherit pname version meta; # upstream suggests avoid using archive generated by github src = fetchzip { @@ -105,13 +114,24 @@ let passthru.tests = { solcWithTests = solc.overrideAttrs (attrs: { doCheck = true; }); }; + } else gccStdenv.mkDerivation rec { + inherit pname version meta; - meta = with lib; { - description = "Compiler for Ethereum smart contract language Solidity"; - homepage = "https://github.com/ethereum/solidity"; - license = licenses.gpl3; - maintainers = with maintainers; [ dbrock akru lionello sifmelcara ]; + src = pkgs.fetchurl { + url = 
"https://github.com/ethereum/solidity/releases/download/v${version}/solc-macos"; + sha256 = "sha256-FNTvAT6oKtlekf2Um3+nt4JxpIP/GnnEPWzFi4JvW+o="; }; + dontUnpack = true; + + installPhase = '' + runHook preInstall + + mkdir -p $out/bin + cp ${src} $out/bin/solc + chmod +x $out/bin/solc + + runHook postInstall + ''; }; in solc diff --git a/pkgs/development/coq-modules/metacoq/default.nix b/pkgs/development/coq-modules/metacoq/default.nix new file mode 100644 index 000000000000..583d8b7adb91 --- /dev/null +++ b/pkgs/development/coq-modules/metacoq/default.nix @@ -0,0 +1,76 @@ +{ lib, which, fetchzip, + mkCoqDerivation, recurseIntoAttrs, single ? false, + coqPackages, coq, equations, version ? null }@args: +with builtins // lib; +let + repo = "metacoq"; + owner = "MetaCoq"; + defaultVersion = with versions; switch coq.coq-version [ + { case = "8.11"; out = "1.0-beta2-8.11"; } + { case = "8.12"; out = "1.0-beta2-8.12"; } + # Do not provide 8.13 because it does not compile with equations 1.3 provided by default (only 1.2.3) + # { case = "8.13"; out = "1.0-beta2-8.13"; } + ] null; + release = { + "1.0-beta2-8.11".sha256 = "sha256-I9YNk5Di6Udvq5/xpLSNflfjRyRH8fMnRzbo3uhpXNs="; + "1.0-beta2-8.12".sha256 = "sha256-I8gpmU9rUQJh0qfp5KOgDNscVvCybm5zX4TINxO1TVA="; + "1.0-beta2-8.13".sha256 = "sha256-IC56/lEDaAylUbMCfG/3cqOBZniEQk8jmI053DBO5l8="; + }; + releaseRev = v: "v${v}"; + + # list of core metacoq packages sorted by dependency order + packages = [ "template-coq" "pcuic" "safechecker" "erasure" "all" ]; + + template-coq = metacoq_ "template-coq"; + + metacoq_ = package: let + metacoq-deps = if package == "single" then [] + else map metacoq_ (head (splitList (pred.equal package) packages)); + pkgpath = if package == "single" then "./" else "./${package}"; + pname = if package == "all" then "metacoq" else "metacoq-${package}"; + pkgallMake = '' + mkdir all + echo "all:" > all/Makefile + echo "install:" >> all/Makefile + '' ; + derivation = mkCoqDerivation ({ + inherit version pname defaultVersion release releaseRev repo owner; + + extraNativeBuildInputs = [ which ]; + mlPlugin = true; + extraBuildInputs = [ coq.ocamlPackages.zarith ]; + propagatedBuildInputs = [ equations ] ++ metacoq-deps; + + patchPhase = '' + patchShebangs ./configure.sh + patchShebangs ./template-coq/update_plugin.sh + patchShebangs ./template-coq/gen-src/to-lower.sh + patchShebangs ./pcuic/clean_extraction.sh + patchShebangs ./safechecker/clean_extraction.sh + patchShebangs ./erasure/clean_extraction.sh + echo "CAMLFLAGS+=-w -60 # Unused module" >> ./safechecker/Makefile.plugin.local + sed -i -e 's/mv $i $newi;/mv $i tmp; mv tmp $newi;/' ./template-coq/gen-src/to-lower.sh ./pcuic/clean_extraction.sh ./safechecker/clean_extraction.sh ./erasure/clean_extraction.sh + '' ; + + configurePhase = optionalString (package == "all") pkgallMake + '' + touch ${pkgpath}/metacoq-config + '' + optionalString (elem package ["safechecker" "erasure"]) '' + echo "-I ${template-coq}/lib/coq/${coq.coq-version}/user-contrib/MetaCoq/Template/" > ${pkgpath}/metacoq-config + '' + optionalString (package == "single") '' + ./configure.sh local + ''; + + preBuild = '' + cd ${pkgpath} + '' ; + + meta = { + homepage = "https://metacoq.github.io/"; + license = licenses.mit; + maintainers = with maintainers; [ cohencyril ]; + }; + } // optionalAttrs (package != "single") + { passthru = genAttrs packages metacoq_; }); + in derivation; +in +metacoq_ (if single then "single" else "all") diff --git a/pkgs/development/libraries/glibmm/default.nix 
b/pkgs/development/libraries/glibmm/default.nix index f409935372ee..8ba33b98634e 100644 --- a/pkgs/development/libraries/glibmm/default.nix +++ b/pkgs/development/libraries/glibmm/default.nix @@ -2,11 +2,11 @@ stdenv.mkDerivation rec { pname = "glibmm"; - version = "2.66.2"; + version = "2.66.3"; src = fetchurl { url = "mirror://gnome/sources/${pname}/${lib.versions.majorMinor version}/${pname}-${version}.tar.xz"; - sha256 = "sha256-sqTNe5rph3lMu1ob7MEM7LZRgrm7hBhoYl1ruxI+2x0="; + sha256 = "sha256-r7liAkkUhdP0QQLZghmhctotP563j848+5JVm6SW5Jk="; }; outputs = [ "out" "dev" ]; diff --git a/pkgs/development/libraries/zimg/default.nix b/pkgs/development/libraries/zimg/default.nix index 38b106d474b7..475ebc7517e5 100644 --- a/pkgs/development/libraries/zimg/default.nix +++ b/pkgs/development/libraries/zimg/default.nix @@ -2,13 +2,13 @@ stdenv.mkDerivation rec { pname = "zimg"; - version = "3.0.3"; + version = "3.0.4"; src = fetchFromGitHub { owner = "sekrit-twc"; repo = "zimg"; rev = "release-${version}"; - sha256 = "0pwgf1mybpa3fs13p6jryzm32vfldyql9biwaypqdcimlnlmyk20"; + sha256 = "1069x49l7kh1mqcq1h3f0m5j0h832jp5x230bh4c613ymgg5kn00"; }; nativeBuildInputs = [ autoreconfHook ]; diff --git a/pkgs/development/lua-modules/generated-packages.nix b/pkgs/development/lua-modules/generated-packages.nix index 8fd6543b27d5..2089cdea46f3 100644 --- a/pkgs/development/lua-modules/generated-packages.nix +++ b/pkgs/development/lua-modules/generated-packages.nix @@ -1952,6 +1952,31 @@ buildLuarocksPackage { }; }) {}; +luaunbound = callPackage({ buildLuarocksPackage, luaOlder, luaAtLeast +, fetchurl, lua +}: +buildLuarocksPackage { + pname = "luaunbound"; + version = "1.0.0-1"; + knownRockspec = (fetchurl { + url = "https://luarocks.org/luaunbound-1.0.0-1.rockspec"; + sha256 = "1zlkibdwrj5p97nhs33cz8xx0323z3kiq5x7v0h3i7v6j0h8ppvn"; + }).outPath; + src = fetchurl { + url = "https://code.zash.se/dl/luaunbound/luaunbound-1.0.0.tar.gz"; + sha256 = "1lsh0ylp5xskygxl5qdv6mhkm1x8xp0vfd5prk5hxkr19jk5mr3d"; + }; + + disabled = with lua; (luaOlder "5.1") || (luaAtLeast "5.5"); + propagatedBuildInputs = [ lua ]; + + meta = { + homepage = "https://www.zash.se/luaunbound.html"; + description = "A binding to libunbound"; + license.fullName = "MIT"; + }; +}) {}; + luautf8 = callPackage({ buildLuarocksPackage, luaOlder, luaAtLeast , fetchurl, lua }: diff --git a/pkgs/development/lua-modules/overrides.nix b/pkgs/development/lua-modules/overrides.nix index 1411038e0c7f..cc179f0b9463 100644 --- a/pkgs/development/lua-modules/overrides.nix +++ b/pkgs/development/lua-modules/overrides.nix @@ -254,6 +254,12 @@ with prev; ]; }); + luaunbound = prev.lib.overrideLuarocks prev.luaunbound(drv: { + externalDeps = [ + { name = "libunbound"; dep = pkgs.unbound; } + ]; + }); + luuid = (prev.lib.overrideLuarocks prev.luuid (drv: { externalDeps = [ { name = "LIBUUID"; dep = pkgs.libuuid; } diff --git a/pkgs/development/python-modules/ansible-later/default.nix b/pkgs/development/python-modules/ansible-later/default.nix index e8f40a109ab8..8ad96e78bcfb 100644 --- a/pkgs/development/python-modules/ansible-later/default.nix +++ b/pkgs/development/python-modules/ansible-later/default.nix @@ -21,7 +21,7 @@ buildPythonPackage rec { pname = "ansible-later"; - version = "2.0.11"; + version = "2.0.12"; format = "pyproject"; disabled = pythonOlder "3.8"; @@ -30,7 +30,7 @@ buildPythonPackage rec { owner = "thegeeklab"; repo = pname; rev = "refs/tags/v${version}"; - hash = "sha256-K4GResTKKWXQ0OHpBwqTLnptQ8ipuQ9iaGZDlPqRUaI="; + hash = 
"sha256-0N/BER7tV8Hv1pvHaf/46BKnzZfHBGuEaPPex/CDQe0="; }; nativeBuildInputs = [ diff --git a/pkgs/development/python-modules/filetype/default.nix b/pkgs/development/python-modules/filetype/default.nix index 3c777d828b48..1a85a61f6426 100644 --- a/pkgs/development/python-modules/filetype/default.nix +++ b/pkgs/development/python-modules/filetype/default.nix @@ -1,21 +1,41 @@ { lib , buildPythonPackage , fetchPypi -, python +, pytestCheckHook +, pythonOlder }: buildPythonPackage rec { pname = "filetype"; - version = "1.0.10"; + version = "1.0.13"; + format = "setuptools"; + + disabled = pythonOlder "3.7"; src = fetchPypi { inherit pname version; - sha256 = "sha256-MjoTUAcxtsZaJTvDkwu86aVt+6cekLYP/ZaKtp2a6Tc="; + hash = "sha256-ahBHYv6T11XJYqqWyz2TCkj5GgdhBHEmxe6tIVPjOwM="; }; - checkPhase = '' - ${python.interpreter} -m unittest discover - ''; + checkInputs = [ + pytestCheckHook + ]; + + pythonImportsCheck = [ + "filetype" + ]; + + disabledTests = [ + # https://github.com/h2non/filetype.py/issues/119 + "test_guess_memoryview" + "test_guess_extension_memoryview" + "test_guess_mime_memoryview" + ]; + + disabledTestPaths = [ + # We don't care about benchmarks + "tests/test_benchmark.py" + ]; meta = with lib; { description = "Infer file type and MIME type of any file/buffer"; diff --git a/pkgs/development/python-modules/flask-jwt-extended/default.nix b/pkgs/development/python-modules/flask-jwt-extended/default.nix index 3b9c9b4a0678..0d99a08ab17a 100644 --- a/pkgs/development/python-modules/flask-jwt-extended/default.nix +++ b/pkgs/development/python-modules/flask-jwt-extended/default.nix @@ -1,20 +1,41 @@ -{ lib, buildPythonPackage, fetchPypi, python-dateutil, flask, pyjwt, werkzeug, pytest }: +{ lib +, buildPythonPackage +, fetchPypi +, flask +, pyjwt +, pytestCheckHook +, python-dateutil +, pythonOlder +, werkzeug +}: buildPythonPackage rec { - pname = "Flask-JWT-Extended"; - version = "4.3.1"; + pname = "flask-jwt-extended"; + version = "4.4.0"; + format = "setuptools"; + + disabled = pythonOlder "3.7"; src = fetchPypi { - inherit pname version; - sha256 = "ad6977b07c54e51c13b5981afc246868b9901a46715d9b9827898bfd916aae88"; + pname = "Flask-JWT-Extended"; + inherit version; + hash = "sha256-P+gVBL3JGtjxy5db0tlexgElHzG94YQRXjn8fm7SPqY="; }; - propagatedBuildInputs = [ python-dateutil flask pyjwt werkzeug ]; - checkInputs = [ pytest ]; + propagatedBuildInputs = [ + flask + pyjwt + python-dateutil + werkzeug + ]; - checkPhase = '' - pytest tests/ - ''; + checkInputs = [ + pytestCheckHook + ]; + + pythonImportsCheck = [ + "flask_jwt_extended" + ]; meta = with lib; { description = "JWT extension for Flask"; diff --git a/pkgs/development/python-modules/proton-client/0001-OpenSSL-path-fix.patch b/pkgs/development/python-modules/proton-client/0001-OpenSSL-path-fix.patch new file mode 100644 index 000000000000..7e97b2da5d3f --- /dev/null +++ b/pkgs/development/python-modules/proton-client/0001-OpenSSL-path-fix.patch @@ -0,0 +1,41 @@ +From 48da17d61e38657dfb10f2ac642fd3e6a45ee607 Mon Sep 17 00:00:00 2001 +From: "P. R. d. 
O" +Date: Wed, 27 Apr 2022 14:29:53 -0600 +Subject: [PATCH] OpenSSL path fix + +--- + proton/srp/_ctsrp.py | 12 ++---------- + 1 file changed, 2 insertions(+), 10 deletions(-) + +diff --git a/proton/srp/_ctsrp.py b/proton/srp/_ctsrp.py +index e19f184..af359c5 100644 +--- a/proton/srp/_ctsrp.py ++++ b/proton/srp/_ctsrp.py +@@ -24,22 +24,14 @@ from .util import PM_VERSION, SRP_LEN_BYTES, SALT_LEN_BYTES, hash_password + dlls = list() + + platform = sys.platform +-if platform == 'darwin': +- dlls.append(ctypes.cdll.LoadLibrary('libssl.dylib')) +-elif 'win' in platform: ++if 'win' in platform: + for d in ('libeay32.dll', 'libssl32.dll', 'ssleay32.dll'): + try: + dlls.append(ctypes.cdll.LoadLibrary(d)) + except Exception: + pass + else: +- try: +- dlls.append(ctypes.cdll.LoadLibrary('libssl.so.10')) +- except OSError: +- try: +- dlls.append(ctypes.cdll.LoadLibrary('libssl.so.1.0.0')) +- except OSError: +- dlls.append(ctypes.cdll.LoadLibrary('libssl.so')) ++ dlls.append(ctypes.cdll.LoadLibrary('@openssl@/lib/libssl@ext@')) + + + class BIGNUM_Struct(ctypes.Structure): +-- +2.35.1 + diff --git a/pkgs/development/python-modules/proton-client/default.nix b/pkgs/development/python-modules/proton-client/default.nix index 01ebed36c72b..ca68c8cb54cf 100644 --- a/pkgs/development/python-modules/proton-client/default.nix +++ b/pkgs/development/python-modules/proton-client/default.nix @@ -1,10 +1,13 @@ { lib +, stdenv , buildPythonPackage , fetchFromGitHub , pythonOlder +, substituteAll , bcrypt , pyopenssl , python-gnupg +, pytestCheckHook , requests , openssl }: @@ -30,14 +33,21 @@ buildPythonPackage rec { buildInputs = [ openssl ]; - # This patch is supposed to indicate where to load OpenSSL library, - # but it is not working as intended. - #patchPhase = '' - # substituteInPlace proton/srp/_ctsrp.py --replace \ - # "ctypes.cdll.LoadLibrary('libssl.so.10')" "'${lib.getLib openssl}/lib/libssl.so'" - #''; - # Regarding the issue above, I'm disabling tests for now - doCheck = false; + patches = [ + # Patches library by fixing the openssl path + (substituteAll { + src = ./0001-OpenSSL-path-fix.patch; + openssl = openssl.out; + ext = stdenv.hostPlatform.extensions.sharedLibrary; + }) + ]; + + checkInputs = [ pytestCheckHook ]; + + disabledTests = [ + #ValueError: Invalid modulus + "test_modulus_verification" + ]; pythonImportsCheck = [ "proton" ]; diff --git a/pkgs/development/python-modules/slixmpp/0001-xep_0030-allow-extra-args-in-get_info_from_domain.patch b/pkgs/development/python-modules/slixmpp/0001-xep_0030-allow-extra-args-in-get_info_from_domain.patch new file mode 100644 index 000000000000..3f73ab91e3a2 --- /dev/null +++ b/pkgs/development/python-modules/slixmpp/0001-xep_0030-allow-extra-args-in-get_info_from_domain.patch @@ -0,0 +1,36 @@ +From 7b5ac168892dedc5bd6be4244b18dc32d37d00fd Mon Sep 17 00:00:00 2001 +From: =?UTF-8?q?F=C3=A9lix=20Baylac-Jacqu=C3=A9?= +Date: Fri, 22 Apr 2022 15:26:05 +0200 +Subject: [PATCH] xep_0030: allow extra args in get_info_from_domain + +Aftermath of ea2d851a. + +http_upload from xep_0363 is now forwarding all its extra input args +to get_info_from_domain. Sadly for us, get_info_from_domain won't +accept any extra args passed that way and will yield a "got an +unexpected keyword argument". + +Modifying get_info_from_domain to accept these extra args. + +I hit this bug by passing a timeout_callback argument to http_upload. +Adding this scenario to the relevant integration test. 
+--- + itests/test_httpupload.py | 1 + + slixmpp/plugins/xep_0030/disco.py | 2 +- + 2 files changed, 2 insertions(+), 1 deletion(-) + +diff --git a/slixmpp/plugins/xep_0030/disco.py b/slixmpp/plugins/xep_0030/disco.py +index 37d453aa..9f9a45f2 100644 +--- a/slixmpp/plugins/xep_0030/disco.py ++++ b/slixmpp/plugins/xep_0030/disco.py +@@ -307,7 +307,7 @@ class XEP_0030(BasePlugin): + return self.api['has_identity'](jid, node, ifrom, data) + + async def get_info_from_domain(self, domain=None, timeout=None, +- cached=True, callback=None): ++ cached=True, callback=None, **iqkwargs): + """Fetch disco#info of specified domain and one disco#items level below + """ + +-- +2.35.1 diff --git a/pkgs/development/python-modules/slixmpp/default.nix b/pkgs/development/python-modules/slixmpp/default.nix index 375f910e5f84..30bdd8b31ff4 100644 --- a/pkgs/development/python-modules/slixmpp/default.nix +++ b/pkgs/development/python-modules/slixmpp/default.nix @@ -39,6 +39,8 @@ buildPythonPackage rec { src = ./hardcode-gnupg-path.patch; inherit gnupg; }) + # Upstream MR: https://lab.louiz.org/poezio/slixmpp/-/merge_requests/198 + ./0001-xep_0030-allow-extra-args-in-get_info_from_domain.patch ]; disabledTestPaths = [ diff --git a/pkgs/development/python-modules/zeroconf/default.nix b/pkgs/development/python-modules/zeroconf/default.nix index b81ac4f1fc5f..e9eba02ac90f 100644 --- a/pkgs/development/python-modules/zeroconf/default.nix +++ b/pkgs/development/python-modules/zeroconf/default.nix @@ -10,7 +10,7 @@ buildPythonPackage rec { pname = "zeroconf"; - version = "0.38.4"; + version = "0.38.5"; format = "setuptools"; disabled = pythonOlder "3.7"; @@ -19,7 +19,7 @@ buildPythonPackage rec { owner = "jstasiak"; repo = "python-zeroconf"; rev = version; - sha256 = "sha256-CLV1/maraSJ3GWnyN/0rLyEyWoQIL18rhm35llgvthw="; + hash = "sha256-QmmVxrvBPEwsmD/XJZClNQj4PUX+7X+75ZWSOO4/C24="; }; propagatedBuildInputs = [ diff --git a/pkgs/os-specific/linux/kernel/linux-testing-bcachefs.nix b/pkgs/os-specific/linux/kernel/linux-testing-bcachefs.nix index 83bd92f44f71..51f47cea2c45 100644 --- a/pkgs/os-specific/linux/kernel/linux-testing-bcachefs.nix +++ b/pkgs/os-specific/linux/kernel/linux-testing-bcachefs.nix @@ -1,9 +1,9 @@ { lib , fetchpatch , kernel -, date ? "2022-04-08" -, commit ? "6ddf061e68560a2bb263b126af7e894a6c1afb5f" -, diffHash ? "1nkrr1cxavw0rqxlyiz7pf9igvqay0d5kk7194v9ph3fcp9rz5kc" +, date ? "2022-04-25" +, commit ? "bdf6d7c1350497bc7b0be6027a51d9330645672d" +, diffHash ? "09bcbklvfj9i9czjdpix2iz7fvjksmavaljx8l92ay1i9fapjmhc" , kernelPatches # must always be defined in bcachefs' all-packages.nix entry because it's also a top-level attribute supplied by callPackage , argsOverride ? {} , ... 
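The first hunk of `pkgs/os-specific/linux/kernel/linux-testing-bcachefs.nix` above bumps the default `date`, `commit` and `diffHash` arguments. Because the expression is instantiated through `callPackage`, those arguments can also be pinned from an overlay. A minimal sketch follows, under the assumption that the kernel is exposed as the top-level attribute `linux_testing_bcachefs`; the values simply reuse the snapshot this change makes the default:

```nix
# Sketch only: pins the bcachefs patch snapshot explicitly so that later bumps
# of the defaults do not silently change this build.  The attribute name
# `linux_testing_bcachefs` is an assumption about how the kernel is exposed.
final: prev: {
  linux_testing_bcachefs = prev.linux_testing_bcachefs.override {
    date = "2022-04-25";
    commit = "bdf6d7c1350497bc7b0be6027a51d9330645672d";
    diffHash = "09bcbklvfj9i9czjdpix2iz7fvjksmavaljx8l92ay1i9fapjmhc";
  };
}
```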
@@ -17,7 +17,6 @@ extraMeta = { branch = "master"; maintainers = with lib.maintainers; [ davidak Madouura ]; - broken = true; }; } // argsOverride; diff --git a/pkgs/servers/moonraker/default.nix b/pkgs/servers/moonraker/default.nix index 2350cd18042a..c46568d24da1 100644 --- a/pkgs/servers/moonraker/default.nix +++ b/pkgs/servers/moonraker/default.nix @@ -20,13 +20,13 @@ let ]); in stdenvNoCC.mkDerivation rec { pname = "moonraker"; - version = "unstable-2022-03-10"; + version = "unstable-2022-04-23"; src = fetchFromGitHub { owner = "Arksine"; repo = "moonraker"; - rev = "ee312ee9c6597c8d077d7c3208ccea4e696c97ca"; - sha256 = "l0VOQIfKgZ/Je4z+SKhWMgYzxye8WKs9W1GkNs7kABo="; + rev = "cd520ba91728abb5a3d959269fbd8e4f40d1eb0b"; + sha256 = "sha256-sopX9t+LjYldx+syKwU3I0x/VYy4hLyXfitG0uumayE="; }; nativeBuildInputs = [ makeWrapper ]; diff --git a/pkgs/servers/xmpp/prosody/default.nix b/pkgs/servers/xmpp/prosody/default.nix index 6b70c4cc9874..607a9dc02016 100644 --- a/pkgs/servers/xmpp/prosody/default.nix +++ b/pkgs/servers/xmpp/prosody/default.nix @@ -1,4 +1,5 @@ { stdenv, fetchurl, lib, libidn, openssl, makeWrapper, fetchhg +, icu , lua , nixosTests , withLibevent ? true @@ -13,7 +14,7 @@ with lib; let luaEnv = lua.withPackages(p: with p; [ - luasocket luasec luaexpat luafilesystem luabitop luadbi-sqlite3 + luasocket luasec luaexpat luafilesystem luabitop luadbi-sqlite3 luaunbound ] ++ lib.optional withLibevent p.luaevent ++ lib.optional withDBI p.luadbi @@ -21,21 +22,19 @@ let ); in stdenv.mkDerivation rec { - version = "0.11.13"; # also update communityModules + version = "0.12.0"; # also update communityModules pname = "prosody"; # The following community modules are necessary for the nixos module # prosody module to comply with XEP-0423 and provide a working # default setup. nixosModuleDeps = [ - "bookmarks" "cloud_notify" "vcard_muc" - "smacks" "http_upload" ]; src = fetchurl { url = "https://prosody.im/downloads/source/${pname}-${version}.tar.gz"; - sha256 = "sha256-OcYbNGoJtRJbYEy5aeFCBsu8uGyBFW/8a6LWJSfPBDI="; + sha256 = "sha256-dS/zIBXaxWX8NBfCGWryaJccNY7gZuUfXZEkE1gNiJo="; }; # A note to all those merging automated updates: Please also update this @@ -43,13 +42,13 @@ stdenv.mkDerivation rec { # version. 
communityModules = fetchhg { url = "https://hg.prosody.im/prosody-modules"; - rev = "54fa2116bbf3"; - sha256 = "sha256-OKZ7tD75q8/GMXruUQ+r9l0BxzdbPHNf41fZ3fHVQVw="; + rev = "65438e4ba563"; + sha256 = "sha256-zHOrMzcgHOdBl7nObM+OauifbcmKEOfAuj81MDSoLMk="; }; nativeBuildInputs = [ makeWrapper ]; buildInputs = [ - luaEnv libidn openssl + luaEnv libidn openssl icu ] ++ withExtraLibs; @@ -63,26 +62,14 @@ stdenv.mkDerivation rec { make -C tools/migration ''; - luaEnvPath = lua.pkgs.lib.genLuaPathAbsStr luaEnv; - luaEnvCPath = lua.pkgs.lib.genLuaCPathAbsStr luaEnv; - # the wrapping should go away once lua hook is fixed postInstall = '' ${concatMapStringsSep "\n" (module: '' cp -r $communityModules/mod_${module} $out/lib/prosody/modules/ '') (lib.lists.unique(nixosModuleDeps ++ withCommunityModules ++ withOnlyInstalledCommunityModules))} - wrapProgram $out/bin/prosody \ - --prefix LUA_PATH ';' "$luaEnvPath" \ - --prefix LUA_CPATH ';' "$luaEnvCPath" wrapProgram $out/bin/prosodyctl \ - --add-flags '--config "/etc/prosody/prosody.cfg.lua"' \ - --prefix LUA_PATH ';' "$luaEnvPath" \ - --prefix LUA_CPATH ';' "$luaEnvCPath" - + --add-flags '--config "/etc/prosody/prosody.cfg.lua"' make -C tools/migration install - wrapProgram $out/bin/prosody-migrator \ - --prefix LUA_PATH ';' "$luaEnvPath" \ - --prefix LUA_CPATH ';' "$luaEnvCPath" ''; passthru = { @@ -95,6 +82,6 @@ stdenv.mkDerivation rec { license = licenses.mit; homepage = "https://prosody.im"; platforms = platforms.linux; - maintainers = with maintainers; [ fpletz globin ninjatrappeur ]; + maintainers = with maintainers; [ fpletz globin ]; }; } diff --git a/pkgs/tools/filesystems/bcachefs-tools/default.nix b/pkgs/tools/filesystems/bcachefs-tools/default.nix index b94f1d83394b..bd8bf1adb9fc 100644 --- a/pkgs/tools/filesystems/bcachefs-tools/default.nix +++ b/pkgs/tools/filesystems/bcachefs-tools/default.nix @@ -22,13 +22,13 @@ stdenv.mkDerivation { pname = "bcachefs-tools"; - version = "unstable-2022-04-08"; + version = "unstable-2022-05-02"; src = fetchFromGitHub { owner = "koverstreet"; repo = "bcachefs-tools"; - rev = "986533d8d5b21c8eb512bbb3f0496d3d2a087c5d"; - sha256 = "1qvb5l937nnls5j82ipgrdh6q5fk923z752rzzqqcms6fz7rrjs4"; + rev = "6f5afc0c12bbf56ffdabe5b2c5297aef255c4baa"; + sha256 = "0483zhm3gmk6fd1pn815i3fixwlwsnks3817gn7n3idbbw0kg5ng"; }; postPatch = '' diff --git a/pkgs/tools/inputmethods/fcitx5/fcitx5-rime-with-nix-env-variable.patch b/pkgs/tools/inputmethods/fcitx5/fcitx5-rime-with-nix-env-variable.patch new file mode 100644 index 000000000000..428a0232dc3b --- /dev/null +++ b/pkgs/tools/inputmethods/fcitx5/fcitx5-rime-with-nix-env-variable.patch @@ -0,0 +1,18 @@ +:100644 100644 fac4f53 aed9617 M src/rimeengine.cpp + +diff --git a/src/rimeengine.cpp b/src/rimeengine.cpp +index fac4f53..aed9617 100644 +--- a/src/rimeengine.cpp ++++ b/src/rimeengine.cpp +@@ -164,7 +164,10 @@ void RimeEngine::rimeStart(bool fullcheck) { + RIME_ERROR() << "Failed to create user directory: " << userDir; + } + } +- const char *sharedDataDir = RIME_DATA_DIR; ++ const char *sharedDataDir = getenv("NIX_RIME_DATA_DIR"); ++ if (!sharedDataDir) { ++ sharedDataDir = RIME_DATA_DIR; ++ } + + RIME_STRUCT(RimeTraits, fcitx_rime_traits); + fcitx_rime_traits.shared_data_dir = sharedDataDir; diff --git a/pkgs/tools/inputmethods/fcitx5/fcitx5-rime.nix b/pkgs/tools/inputmethods/fcitx5/fcitx5-rime.nix index 3743d6cb9fc8..fac81c8dea12 100644 --- a/pkgs/tools/inputmethods/fcitx5/fcitx5-rime.nix +++ b/pkgs/tools/inputmethods/fcitx5/fcitx5-rime.nix @@ -35,6 +35,8 @@ 
stdenv.mkDerivation rec { librime ]; + patches = [ ./fcitx5-rime-with-nix-env-variable.patch ]; + meta = with lib; { description = "RIME support for Fcitx5"; homepage = "https://github.com/fcitx/fcitx5-rime"; diff --git a/pkgs/tools/misc/zellij/default.nix b/pkgs/tools/misc/zellij/default.nix index b12d096435bd..421c7d21d305 100644 --- a/pkgs/tools/misc/zellij/default.nix +++ b/pkgs/tools/misc/zellij/default.nix @@ -15,16 +15,16 @@ rustPlatform.buildRustPackage rec { pname = "zellij"; - version = "0.27.0"; + version = "0.29.1"; src = fetchFromGitHub { owner = "zellij-org"; repo = "zellij"; rev = "v${version}"; - sha256 = "sha256-iQ+Z1A/wiui2IHuK35e6T/44TYaf6+KbaDl5GfVF2vo="; + sha256 = "sha256-KuelmMQdCazwTlolH5xvvNXZfzHQDUV6rrlk037GFb8="; }; - cargoSha256 = "sha256-DMHIvqClBpBplvqqXM2dUOumO+Ean4yAHWDplJ9PaUM="; + cargoSha256 = "sha256-He8rMY8n15ZSF/GcbuYTx2JfZgqQnsZLfqP+lUYxnzw="; nativeBuildInputs = [ mandown diff --git a/pkgs/tools/text/bashblog/0001-Setting-markdown_bin.patch b/pkgs/tools/text/bashblog/0001-Setting-markdown_bin.patch new file mode 100644 index 000000000000..7e6c78dd9dcb --- /dev/null +++ b/pkgs/tools/text/bashblog/0001-Setting-markdown_bin.patch @@ -0,0 +1,25 @@ +From 1990ac93c9dbf3ada0eb2f045ef1aa95bbef7018 Mon Sep 17 00:00:00 2001 +From: "P. R. d. O" +Date: Thu, 21 Apr 2022 07:40:30 -0600 +Subject: [PATCH] Setting markdown_bin + +--- + bb.sh | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/bb.sh b/bb.sh +index 9d8e645..40fb54d 100755 +--- a/bb.sh ++++ b/bb.sh +@@ -160,7 +160,7 @@ global_variables() { + + # Markdown location. Trying to autodetect by default. + # The invocation must support the signature 'markdown_bin in.md > out.html' +- [[ -f Markdown.pl ]] && markdown_bin=./Markdown.pl || markdown_bin=$(which Markdown.pl 2>/dev/null || which markdown 2>/dev/null) ++ markdown_bin=@markdown_path@ + } + + # Check for the validity of some variables +-- +2.35.1 + diff --git a/pkgs/tools/text/bashblog/default.nix b/pkgs/tools/text/bashblog/default.nix new file mode 100644 index 000000000000..2649b5640441 --- /dev/null +++ b/pkgs/tools/text/bashblog/default.nix @@ -0,0 +1,59 @@ +{ stdenv +, lib +, fetchzip +, fetchFromGitHub +, makeWrapper +, substituteAll +, perlPackages +# Flags to enable processors +# Currently, Markdown.pl does not work +, usePandoc ? 
true +, pandoc }: + +let + inherit (perlPackages) TextMarkdown; + # As bashblog supports various markdown processors + # we can set flags to enable a certain processor + markdownpl_path = "${perlPackages.TextMarkdown}/bin/Markdown.pl"; + pandoc_path = "${pandoc}/bin/pandoc"; + +in stdenv.mkDerivation rec { + pname = "bashblog"; + version = "unstable-2022-03-26"; + + src = fetchFromGitHub { + owner = "cfenollosa"; + repo = "bashblog"; + rev = "c3d4cc1d905560ecfefce911c319469f7a7ff8a8"; + sha256 = "sha256-THlP/JuaZzDq9QctidwLRiUVFxRhGNhRKleWbQiqsgg="; + }; + + nativeBuildInputs = [ makeWrapper ]; + + buildInputs = [ TextMarkdown ] + ++ lib.optionals usePandoc [ pandoc ]; + + patches = [ + (substituteAll { + src = ./0001-Setting-markdown_bin.patch; + markdown_path = if usePandoc then pandoc_path else markdownpl_path; + }) + ]; + + postPatch = '' + patchShebangs bb.sh + ''; + + installPhase = '' + mkdir -p $out/bin + install -Dm755 bb.sh $out/bin/bashblog + ''; + + meta = with lib; { + description = "A single Bash script to create blogs"; + homepage = "https://github.com/cfenollosa/bashblog"; + license = licenses.gpl3Only; + platforms = platforms.unix; + maintainers = with maintainers; [ wolfangaukang ]; + }; +} diff --git a/pkgs/top-level/all-packages.nix b/pkgs/top-level/all-packages.nix index 1b754a6a5905..8a5abce5cfbe 100644 --- a/pkgs/top-level/all-packages.nix +++ b/pkgs/top-level/all-packages.nix @@ -1874,6 +1874,8 @@ with pkgs; awless = callPackage ../tools/virtualization/awless { }; + bashblog = callPackage ../tools/text/bashblog { }; + berglas = callPackage ../tools/admin/berglas { }; betterdiscordctl = callPackage ../tools/misc/betterdiscordctl { }; diff --git a/pkgs/top-level/coq-packages.nix b/pkgs/top-level/coq-packages.nix index c71ec2acf944..6af05d761c43 100644 --- a/pkgs/top-level/coq-packages.nix +++ b/pkgs/top-level/coq-packages.nix @@ -77,6 +77,7 @@ let mathcomp-word = callPackage ../development/coq-modules/mathcomp-word {}; mathcomp-zify = callPackage ../development/coq-modules/mathcomp-zify {}; mathcomp-tarjan = callPackage ../development/coq-modules/mathcomp-tarjan {}; + metacoq = callPackage ../development/coq-modules/metacoq { }; metalib = callPackage ../development/coq-modules/metalib { }; multinomials = callPackage ../development/coq-modules/multinomials {}; odd-order = callPackage ../development/coq-modules/odd-order { };
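Both the `proton-client` and `bashblog` changes above use the same idiom: a patch file containing `@placeholder@` tokens is passed through `substituteAll`, so concrete store paths are spliced in before `patchPhase` applies it. A minimal, self-contained sketch of that idiom is given below; the package name, the patch file name and the `@tool_path@` placeholder are illustrative and not taken from this change:

```nix
# Illustrative sketch of the substituteAll-patch idiom used by the bashblog
# and proton-client derivations in this patch.  All names here are made up;
# the patch file is assumed to contain a line referencing @tool_path@.
{ stdenv, substituteAll, hello }:

stdenv.mkDerivation {
  pname = "example-with-substituted-patch";
  version = "0.1";
  src = ./.;  # some local source tree containing the script to patch

  patches = [
    (substituteAll {
      src = ./0001-hardcode-tool-path.patch;   # hypothetical patch file
      tool_path = "${hello}/bin/hello";        # value substituted for @tool_path@
    })
  ];

  installPhase = ''
    runHook preInstall
    mkdir -p $out
    cp -r . $out/
    runHook postInstall
  '';
}
```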