Otherwise it will try to guess the log directory, and the guess may
differ depending on whether chroot builds are enabled.
The gruesome details from m4/sudo.m4:
````
dnl
dnl Where the I/O log files go, use /var/log/sudo-io if
dnl /var/log exists, else /{var,usr}/adm/sudo-io
dnl
AC_DEFUN([SUDO_IO_LOGDIR], [
    AC_MSG_CHECKING(for I/O log dir location)
    if test "${with_iologdir-yes}" != "yes"; then
        iolog_dir="$with_iologdir"
    elif test -d "/var/log"; then
        iolog_dir="/var/log/sudo-io"
    elif test -d "/var/adm"; then
        iolog_dir="/var/adm/sudo-io"
    else
        iolog_dir="/usr/adm/sudo-io"
    fi
    if test "${with_iologdir}" != "no"; then
        SUDO_DEFINE_UNQUOTED(_PATH_SUDO_IO_LOGDIR, "$iolog_dir")
    fi
    AC_MSG_RESULT($iolog_dir)
])dnl
````
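The fix, then, is to pass the location explicitly so configure never
has to probe. A minimal sketch of what that looks like in the Nix
expression (name, version, and attribute layout are illustrative; only
the configure flag matters):
````
# Sketch: pin the I/O log directory so configure never probes the filesystem.
stdenv.mkDerivation {
  name = "sudo-1.8.15";  # version is illustrative
  # src = fetchurl { ... };  # unchanged from the existing derivation
  configureFlags = [
    "--with-iologdir=/var/log/sudo-io"
  ];
}
````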
Running zpaq on an older but not ancient 64-bit Intel server aborts
with an ‘Illegal instruction’ error. It turns out the build expression
was using -march=native to generate distribution binaries...
Change this to more conservative, portable settings which should
cover ‘all’ CPUs. It may run slightly slower, but at least it runs.
As a nice side effect, all common compile flags are now back in
`compileFlags` whence they came, and actually used consistently.
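A rough sketch of the idea (exact flags and file names are
illustrative, not the literal change):
````
# One conservative flag set, shared by every compile step. No -march=native,
# so the resulting binary runs on any x86-64 CPU.
compileFlags = "-O3 -DNDEBUG -mtune=generic";

buildPhase = ''
  g++ $compileFlags -c libzpaq.cpp
  g++ $compileFlags zpaq.cpp libzpaq.o -o zpaq
'';
````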
http://hydra.nixos.org/eval/1234895
The mass errors on Hydra seem transient; I verified ghc on i686-linux.
Only darwin jobs are queued ATM. There's a libpng security update
included in this merge, so I don't want to wait too long.
This improves our Bundler integration (i.e. `bundlerEnv`).
Before describing the implementation differences, I'd like to point out
a breaking change: buildRubyGem now expects `gemName` and `version` as
arguments, rather than a `name` attribute in the form of
"<gem-name>-<version>".
Now for the differences in implementation.
The previous implementation installed all gems at once in a single
derivation. This was made possible by using a set of monkey-patches to
prevent Bundler from downloading gems impurely, and to help Bundler
find and activate all required gems prior to installation. This had
several downsides:
* The patches were really hard to understand, and required subtle
interaction with the rest of the build environment.
* A single install failure would cause the entire derivation to fail.
The new implementation takes a different approach: we install gems into
separate derivations, and then present Bundler with a symlink forest
thereof (sketched after the list below). This has a couple of benefits
over the existing approach:
* Fewer patches are required, with less interplay with the rest of the
build environment.
* Changes to one gem no longer cause a rebuild of the entire dependency
graph.
* Builds take 20% less time (using gitlab as a reference).
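In Nix terms, the symlink forest has roughly this shape (a sketch; the
attribute names are illustrative and the real implementation differs in
its details):
````
# Each gem is built as its own derivation; buildEnv merges them into one
# tree of symlinks that can be handed to Bundler as a single prefix.
gemEnv = buildEnv {
  name = "gem-env";
  paths = [ gems.nokogiri gems.rails gems.rake ];  # one derivation per gem
};
````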
It's unfortunate that we still have to muck with Bundler's internals,
though it's unavoidable with the way that Bundler is currently designed.
There are a number of improvements that could be made in Bundler that would
simplify our packaging story:
* Bundler requires all installed gems reside within the same prefix
(GEM_HOME), unlike RubyGems which allows for multiple prefixes to
be specified through GEM_PATH. It would be ideal if Bundler allowed
for packages to be installed and sourced from multiple prefixes.
* Bundler installs git sources very differently from how RubyGems
installs gem packages, and, unlike RubyGems, it doesn't provide a
public interface (CLI or programmatic) to guide the installation of a
single gem. We are left with the options of either reimplementing a
considerable portion of Bundler, or patching and using parts of its
internals; I chose the latter. Ideally, there would be a way
to install gems from git sources in a manner similar to how we drive
`gem` to install gem packages.
* When a bundled program is executed (via `bundle exec` or a
binstub that does `require 'bundler/setup'`), the setup process reads
the Gemfile.lock, activates the dependencies, re-serializes the lock
file it read earlier, and then attempts to overwrite the Gemfile.lock
if the contents aren't bit-identical. I think the reasoning is that
by merely running an application with a newer version of Bundler, you'll
automatically keep the Gemfile.lock up-to-date with any changes in the
format. Unfortunately, that doesn't play well with any form of
packaging, because bundler will immediately cause the application to
abort when it attempts to write to the read-only Gemfile.lock in the
store. We work around this by normalizing the Gemfile.lock with the
version of Bundler that we'll use at runtime before we copy it into
the store (a sketch follows this list). This feels fragile, but it's
the best we can do without changes upstream, or resorting to more
delicate hacks.
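The normalization step looks roughly like this (a sketch; the hook and
exact invocation are illustrative, though `Bundler::Definition.build`
and `to_lock` are the entry points involved):
````
# Sketch: re-serialize Gemfile.lock with the runtime Bundler before it is
# copied into the (read-only) store, so the runtime setup finds it
# bit-identical and never attempts to rewrite it.
preBuild = ''
  ruby -e '
    require "bundler"
    definition = Bundler::Definition.build("Gemfile", "Gemfile.lock", nil)
    File.write("Gemfile.lock", definition.to_lock)
  '
'';
````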
With all of the challenges in using Bundler, one might wonder why we
can't just cut Bundler out of the picture and use RubyGems. After all,
Nix provides most of the isolation that Bundler is used for anyway.
The problem, however, is that almost every Rails application calls
`Bundler::require` at startup (by way of the default project templates).
Because bundler will then, by default, `require` each gem listed in the
Gemfile, Rails applications are almost always written such that none of
the source files explicitly require their dependencies. That leaves us
with two options: support and use Bundler, or maintain massive patches
for every Rails application that we package.
Closes #8612
The configure script tries to probe whether /var/run exists when
determining the location for the pid file, which is not very nice when
doing chroot builds. Just set it explicitly to avoid the problem.
For reference, the culprit in configure.ac:
````
piddir=/var/run
if test ! -d $piddir ; then
    piddir=`eval echo ${sysconfdir}`
    case $piddir in
        NONE/*) piddir=`echo $piddir | sed "s~NONE~$ac_default_prefix~"` ;;
    esac
fi

AC_ARG_WITH([pid-dir],
    [  --with-pid-dir=PATH     Specify location of ssh.pid file],
    ...
````
Also, use the `install-nokeys` target in installPhase so we avoid
installing useless host keys into $out/etc/ssh and improve build purity
as well.
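Both tweaks together look roughly like this in the expression (the
exact pid-file path is illustrative):
````
# Avoid the /var/run probe and skip host-key generation at install time.
configureFlags = [ "--with-pid-dir=/run" ];
installTargets = "install-nokeys";
````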
Relevant changes:
- Python version switched to Python 3
- ssdeep library got replaced with tlsh
- the 'magic' Python package got replaced with a different one
- Minor build system improvements == less work for us
Currently, building RPM with `python = python3` causes this:
````
checking for a Python interpreter with version >= 2.6... python3
checking for python3... /nix/store/dykqxnrwiz9drlcv2wy8lpvl3xvklx0g-python3-3.4.3/bin/python3
checking for python3 version... 3.4
checking for Python.h... yes
checking for library containing Py_Main... no
configure: error: missing python library
````
That comes from this snippet in configure.ac:
````
AC_SEARCH_LIBS([Py_Main],[python${PYTHON_VERSION} python],[
  WITH_PYTHON_LIB="$ac_res"
],[AC_MSG_ERROR([missing python library])
])
````
So it's looking for (e.g.) `libpython3.4.so`, whereas we have `libpython3.4m.so`.
Patching the configure script to match seems to make that work (although
I don't really understand what the heck this 'm' business is about).
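The patch can be as small as a sed over the generated configure script;
a sketch (the sed expression is illustrative and the real patch may
differ):
````
postPatch = ''
  # Also accept the ABI-suffixed library name, e.g. libpython3.4m.
  sed -i 's/python''${PYTHON_VERSION} python/python''${PYTHON_VERSION}m python''${PYTHON_VERSION} python/' configure
'';
````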
Removed path substitutions from setup.py because these should be handled
by the setuptools install prefix.
Except that the install prefix won't quite work until issue #4968 is
resolved.
In the meantime there are preInstall and postInstall scripts so that
this package continues to work with the nix python packaging
improvements.
It's not included in upstream beets but is linked in the documentation
under "Other plugins", see:
http://beets.readthedocs.org/en/v1.3.15/plugins/index.html#other-plugins
I found this one particularly useful for syncing files to various media
players that refuse to read my FLAC files properly.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
After trying with a dozen files, it seems the bs1770gain backend is much
more reliable than the audiotools backend and especially does a better
job (well, compared to audiotools, which either does nothing at all
or throws an exception) when used on albums that contain different
sample rates/sizes.
Additionally, we already had a few issues regarding the audiotools
backend, even to the extent that @sampsyo almost wanted to drop it
upstream (see sampsyo/beets#1342).
Also related are issues #10376 and sampsyo/beets#1592.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
When a new version of colordiff is released the old tarball is moved to
the archive directory. This breaks builds until the derivation is
updated to the new version. This commit lets fetchurl know about the
archive URL.
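With fetchurl this is just a list of fallback URLs; a sketch (the exact
URL forms and the hash are placeholders):
````
src = fetchurl {
  # Try the current location first, then the archive the tarball moves to.
  urls = [
    "http://www.colordiff.org/colordiff-${version}.tar.gz"
    "http://www.colordiff.org/archive/colordiff-${version}.tar.gz"
  ];
  sha256 = "...";  # placeholder
};
````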