Aggregation of development blogs from the GNU Project
A draft of a proposed GNU extension to the Algol 68 programming language has been published today at https://algol68-lang.org/docs/GNU68-2025-004-supper.pdf.
This new stropping regime aims to be more appealing to contemporary programmers and more convenient to use on today's computing systems, while retaining the full expressive power of a stropped language and remaining 100% backwards compatible as a super-extension.
The stropping regime has already been implemented in the GCC Algol 68 front-end (https://gcc.gnu.org/wiki/Algol68FrontEndGCC) and in the Emacs a68-mode, which provides full automatic indentation and syntax highlighting.
The sources of the godcc program have already been transitioned to the new regime, and the result is quite satisfactory. Check it out!
Comments and suggestions for the draft are very welcome, and would help to move the draft forward to a final state. Please send them to algol68@gcc.gnu.org.
Salud, and happy Easter everyone!
Download from https://ftp.gnu.org/gnu/gperf/gperf-3.3.tar.gz
New in this release:
20 April, 2025 12:43PM by Bruno Haible
19 April 2025 Unifont 16.0.03 is now available. This is a minor release with many glyph improvements. See the ChangeLog file for details.
Download this release from GNU server mirrors at:
https://ftpmirror.gnu.org/unifont/unifont-16.0.03/
or if that fails,
https://ftp.gnu.org/gnu/unifont/unifont-16.0.03/
or, as a last resort,
ftp://ftp.gnu.org/gnu/unifont/unifont-16.0.03/
These files are also available on the unifoundry.com website:
https://unifoundry.com/pub/unifont/unifont-16.0.03/
Font files are in the subdirectory
https://unifoundry.com/pub/unifont/unifont-16.0.03/font-builds/
A more detailed description of font changes is available at
https://unifoundry.com/unifont/index.html
and of utility program changes at
https://unifoundry.com/unifont/unifont-utilities.html
Information about Hangul modifications is at
https://unifoundry.com/hangul/index.html
and
http://unifoundry.com/hangul/hangul-generation.html
Enjoy!
19 April, 2025 04:08PM by Paul Hardy
Remember the XZ Utils backdoor? One factor that enabled the attack was poor auditing of the release tarballs for differences compared to the Git version controlled source code. This proved to be a useful place to distribute malicious data.
The differences between release tarballs and upstream Git sources typically consist of vendored and generated files. Lots of them. Auditing all source tarballs in a distribution for similar issues is hard and boring work for humans. Wouldn’t it be better if that human auditing time could be spent auditing the actual source code stored in upstream version control instead? That’s where auditing time would help the most.
Are there better ways to address the concern about differences between version control sources and tarball artifacts? Let’s consider some approaches:
While I like the properties of the first solution, and have made an effort to support that approach, I don’t think normal source tarballs are going away any time soon. I am concerned that it may not even be a desirable complete solution to this problem: we may need tarballs with pre-generated content in them for various reasons that aren’t entirely clear to us today.
So let’s consider the second approach. It could help while waiting for more experience with the first approach, to see if there are any fundamental problems with it.
How do you know that the XZ release tarballs were actually derived from their version control sources? The same for Gzip? Coreutils? Tar? Sed? Bash? GCC? We don’t know this! I am not aware of any automated or collaborative effort to perform this independent confirmation, nor of anyone attempting to do it on a regular basis. We would want to be able to do this in the year 2042 too, and I think the best way to get there is to do the verification continuously in a pipeline, fixing bugs as time passes. The current state of the art seems to be that people audit the differences manually and hope to find something. I suspect many package maintainers ignore the problem, take the release source tarballs, and trust upstream about this.
We can do better.
I have launched a project to set up a GitLab pipeline that invokes per-release scripts to rebuild each release artifact from its git sources. Currently it only contains recipes for projects that I released myself: releases which were done in a controlled way, with considerable care to make reproducing the tarballs possible. The project homepage is here:
https://gitlab.com/debdistutils/verify-reproducible-releases
The project is able to reproduce the release tarballs for Libtasn1 v4.20.0, InetUtils v2.6, Libidn2 v2.3.8, Libidn v1.43, and GNU SASL v2.2.2. You can see this in a recent successful pipeline. All of those releases were prepared using Guix, and I’m hoping the Guix time-machine will make it possible to keep re-generating these tarballs for many years to come.
I spent some time trying to reproduce the current XZ release tarball for version 5.8.1. That would have been a nice example, wouldn’t it? First I had to somehow mimic upstream’s build environment. The XZ release tarball contains GNU Libtool files that are identified with version 2.5.4.1-baa1-dirty. I initially assumed this was due to the maintainer having installed libtool from git locally (after making some modifications) and made the XZ release using it. Later I learned that it may actually be coming from ArchLinux, which ships this particular libtool version. It seems weird for a distribution to use libtool built from a non-release tag, and furthermore to apply patches to it, but things are what they are. I made some effort to set up an ArchLinux build environment, however the now-current Gettext version in ArchLinux seems to be more recent than the one that was used to prepare the XZ release. I don’t know ArchLinux well enough to set up an environment corresponding to an earlier version of ArchLinux, which would be required to finish this. I gave up; maybe the XZ release wasn’t prepared on ArchLinux after all. Actually XZ became a good example for this writeup anyway: while you would think this should be trivial, the fact is that it isn’t! (There is another aspect here: fingerprinting the versions used to prepare release tarballs allows you to infer what kind of OS maintainers are using to make releases on, which is interesting on its own.)
I made some small attempts to reproduce the tarball for GNU Shepherd version 1.0.4 too, but I still haven’t managed to complete it.
Do you want a supply-chain challenge for the Easter weekend? Pick some well-known software and try to re-create the official release tarballs from the corresponding Git checkout. Is anyone able to reproduce anything these days? Bonus points for wrapping it up as a merge request to my project.
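As a tiny first step for such an audit, comparing the member lists of two tarballs shows which vendored or generated files exist only in the release artifact. Here is a minimal Python sketch; the archives and file names below are made up for illustration, not taken from any real project:

```python
import io
import tarfile

def member_names(tar_bytes: bytes) -> set[str]:
    """Set of member names in a (possibly compressed) tar archive."""
    with tarfile.open(fileobj=io.BytesIO(tar_bytes), mode="r:*") as tf:
        return {m.name for m in tf.getmembers()}

def extra_members(release: bytes, vcs: bytes) -> set[str]:
    """Members in the release tarball but absent from the VCS-derived
    one -- the vendored/generated files that need auditing."""
    return member_names(release) - member_names(vcs)

def make_tar(names: list[str]) -> bytes:
    """Build a small in-memory .tar.gz with one-byte files (demo only)."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tf:
        for name in names:
            data = b"x"
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tf.addfile(info, io.BytesIO(data))
    return buf.getvalue()

vcs = make_tar(["pkg/src.c", "pkg/Makefile.am"])
release = make_tar(["pkg/src.c", "pkg/Makefile.am", "pkg/configure"])
print(sorted(extra_members(release, vcs)))  # ['pkg/configure']
```

Diffing the contents of the common members would be the obvious next step.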
Happy Supply-Chain Security Hacking!
17 April, 2025 07:24PM by simon
Greetings! While these tiny issues will likely not affect many, if any,
there are alas a few small errata in the 2.7.1 tarball release, posted
here for those interested. They will of course be incorporated in the
next release.
modified gcl/debian/rules
@@ -138,7 +138,7 @@ clean: debian/control debian/gcl.templates
rm -rf $(INS) debian/substvars debian.upstream
rm -rf *stamp build-indep
rm -f debian/elpa-gcl$(EXT).elpa debian/gcl$(EXT)-pkg.el
- rm -rf $(EXT_TARGS) info/gcl$(EXT)*.info*
+ rm -rf $(EXT_TARGS) info/gcl$(EXT)*.info* gcl_pool
debian-clean: debian/control debian/gcl.templates
dh_testdir
modified gcl/git.tag
@@ -1,2 +1,2 @@
-"Version_2_7_0"
+"Version_2_7_1"
modified gcl/o/alloc.c
@@ -707,6 +707,7 @@ empty_relblock(void) {
for (;!rb_emptyp();) {
tm_table[t_relocatable].tm_adjgbccnt--;
expand_contblock_index_space();
+ expand_contblock_array();
GBC(t_relocatable);
}
sSAleaf_collection_thresholdA->s.s_dbind=o;
11 April, 2025 10:06PM by Camm Maguire
Greetings! The GCL team is happy to announce the release of version
2.7.1, the culmination of many years of work and a major development
in the evolution of GCL. Please see http://www.gnu.org/software/gcl for
downloading information.
11 April, 2025 02:31PM by Camm Maguire
Don’t do this:
thing = Thing()
try:
    thing.do_stuff()
finally:
    thing.close()
Do do this:
from contextlib import closing

with closing(Thing()) as thing:
    thing.do_stuff()
Why is the second better? Using contextlib.closing() ties closing the item to its creation. These baby examples are about equally easy to reason about, with only a single line in the try block, but consider what happens if (or rather when) more lines get added in the future: in the first example the close moves away, potentially offscreen, but that doesn't happen in the second.
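To make the pattern concrete, here is a runnable sketch; the Thing class is a hypothetical stand-in for any object with a close() method:

```python
from contextlib import closing

class Thing:
    """Stand-in resource with a close() method (hypothetical example)."""
    def __init__(self):
        self.closed = False

    def do_stuff(self):
        return "did stuff"

    def close(self):
        self.closed = True

# closing() guarantees close() runs even if do_stuff() raises.
with closing(Thing()) as thing:
    result = thing.do_stuff()

print(result)        # did stuff
print(thing.closed)  # True
```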
11 April, 2025 10:27AM by gbenson
This is a bugfix release for gnunet 0.24.0. It fixes some regressions and minor bugs.
The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A
Note that due to mirror synchronization, not all links may be functional early after the release. For direct access try https://ftp.gnu.org/gnu/gnunet/
This is to announce grep-3.12, a stable release.
It's been nearly two years! There have been two bug fixes and many
harder-to-see improvements via gnulib. Thanks to Paul Eggert for doing
so much of the work and Bruno Haible for all the testing and all he does
to make gnulib a paragon of portable, reliable, top-notch code.
There have been 77 commits by 6 people in the 100 weeks since 3.11.
See the NEWS below for a brief summary.
Thanks to everyone who has contributed!
The following people contributed changes to this release:
Bruno Haible (5)
Carlo Marcelo Arenas Belón (1)
Collin Funk (1)
Grisha Levit (1)
Jim Meyering (31)
Paul Eggert (38)
Jim
[on behalf of the grep maintainers]
==================================================================
Here is the GNU grep home page:
https://gnu.org/s/grep/
Here are the compressed sources:
https://ftp.gnu.org/gnu/grep/grep-3.12.tar.gz (3.1MB)
https://ftp.gnu.org/gnu/grep/grep-3.12.tar.xz (1.9MB)
Here are the GPG detached signatures:
https://ftp.gnu.org/gnu/grep/grep-3.12.tar.gz.sig
https://ftp.gnu.org/gnu/grep/grep-3.12.tar.xz.sig
Use a mirror for higher download bandwidth:
https://www.gnu.org/order/ftp.html
Here are the SHA1 and SHA256 checksums:
025644ca3ea4f59180d531547c53baeb789c6047 grep-3.12.tar.gz
ut2lRt/Eudl+mS4sNfO1x/IFIv/L4vAboenNy+dkTNw= grep-3.12.tar.gz
4b4df79f5963041d515ef64cfa245e0193a33009 grep-3.12.tar.xz
JkmyfA6Q5jLq3NdXvgbG6aT0jZQd5R58D4P/dkCKB7k= grep-3.12.tar.xz
Verify the base64 SHA256 checksum with cksum -a sha256 --check
from coreutils-9.2 or OpenBSD's cksum since 2007.
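As background, the base64 checksums above are simply the raw SHA-256 digest bytes in base64 encoding, rather than the usual hexadecimal. A small Python sketch of the idea (not part of the announcement):

```python
import base64
import hashlib

def b64_sha256(data: bytes) -> str:
    """Base64 encoding of the raw SHA-256 digest, the form shown in
    the checksum list above."""
    return base64.b64encode(hashlib.sha256(data).digest()).decode("ascii")

# Example: the well-known digest of empty input.
print(b64_sha256(b""))  # 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=
```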
Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:
gpg --verify grep-3.12.tar.gz.sig
The signature should match the fingerprint of the following key:
pub rsa4096/0x7FD9FCCB000BEEEE 2010-06-14 [SCEA]
Key fingerprint = 155D 3FC5 00C8 3448 6D1E EA67 7FD9 FCCB 000B EEEE
uid [ unknown] Jim Meyering <jim@meyering.net>
uid [ unknown] Jim Meyering <meyering@fb.com>
uid [ unknown] Jim Meyering <meyering@gnu.org>
If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.
gpg --locate-external-key jim@meyering.net
gpg --recv-keys 7FD9FCCB000BEEEE
wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=grep&download=1' | gpg --import -
As a last resort to find the key, you can try the official GNU
keyring:
wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
gpg --keyring gnu-keyring.gpg --verify grep-3.12.tar.gz.sig
This release is based on the grep git repository, available as
git clone https://git.savannah.gnu.org/git/grep.git
with commit 3f8c09ec197a2ced82855f9ecd2cbc83874379ab tagged as v3.12.
For a summary of changes and contributors, see:
https://git.sv.gnu.org/gitweb/?p=grep.git;a=shortlog;h=v3.12
or run this command from a git-cloned grep directory:
git shortlog v3.11..v3.12
This release was bootstrapped with the following tools:
Autoconf 2.72.76-2f64
Automake 1.17.0.91
Gnulib 2025-04-04 3773db653242ab7165cd300295c27405e4f9cc79
NEWS
* Noteworthy changes in release 3.12 (2025-04-10) [stable]
** Bug fixes
Searching a directory with at least 100,000 entries no longer fails
with "Operation not supported" and exit status 2. Now, this prints 1
and no diagnostic, as expected:
$ mkdir t && cd t && seq 100000|xargs touch && grep -r x .; echo $?
1
[bug introduced in grep 3.11]
-mN where 1 < N no longer mistakenly lseeks to end of input merely
because standard output is /dev/null.
** Changes in behavior
The --unix-byte-offsets (-u) option is gone. In grep-3.7 (2021-08-14)
it became a warning-only no-op. Before then, it was a Windows-only no-op.
On Windows platforms and on AIX in 32-bit mode, grep in some cases
now supports Unicode characters outside the Basic Multilingual Plane.
10 April, 2025 05:04PM by Jim Meyering
This is to announce gzip-1.14, a stable release.
Most notable: "gzip -d" is up to 40% faster on x86_64 CPUs with pclmul
support. Why? Because about half of its time was spent computing a CRC
checksum, and that code is far more efficient now. Even on 10-year-old
CPUs lacking pclmul support, it's ~20% faster. Thanks to Lasse Collin
for alerting me to this very early on, to Sam Russell for contributing
gnulib's new crc module and to Bruno Haible and everyone else who keeps
the bar so high for all of gnulib. And as usual, thanks to Paul Eggert
for many contributions everywhere.
There have been 58 commits by 7 people in the 85 weeks since 1.13.
See the NEWS below for a brief summary.
Thanks to everyone who has contributed!
The following people contributed changes to this release:
Bruno Haible (1)
Collin Funk (4)
Jim Meyering (26)
Lasse Collin (1)
Paul Eggert (24)
Sam Russell (1)
Simon Josefsson (1)
Jim
[on behalf of the gzip maintainers]
==================================================================
Here is the GNU gzip home page:
https://gnu.org/s/gzip/
Here are the compressed sources:
https://ftp.gnu.org/gnu/gzip/gzip-1.14.tar.gz (1.4MB)
https://ftp.gnu.org/gnu/gzip/gzip-1.14.tar.xz (868KB)
Here are the GPG detached signatures:
https://ftp.gnu.org/gnu/gzip/gzip-1.14.tar.gz.sig
https://ftp.gnu.org/gnu/gzip/gzip-1.14.tar.xz.sig
Use a mirror for higher download bandwidth:
https://www.gnu.org/order/ftp.html
Here are the SHA1 and SHA256 checksums:
27f9847892a1c59b9527469a8a3e5d635057fbdd gzip-1.14.tar.gz
YT1upE8SSNc3DHzN7uDdABegnmw53olLPG8D+YEZHGs= gzip-1.14.tar.gz
05f44a8a589df0171e75769e3d11f8b11d692f58 gzip-1.14.tar.xz
Aae4gb0iC/32Ffl7hxj4C9/T9q3ThbmT3Pbv0U6MCsY= gzip-1.14.tar.xz
Verify the base64 SHA256 checksum with cksum -a sha256 --check
from coreutils-9.2 or OpenBSD's cksum since 2007.
Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:
gpg --verify gzip-1.14.tar.gz.sig
The signature should match the fingerprint of the following key:
pub rsa4096/0x7FD9FCCB000BEEEE 2010-06-14 [SCEA]
Key fingerprint = 155D 3FC5 00C8 3448 6D1E EA67 7FD9 FCCB 000B EEEE
uid [ unknown] Jim Meyering <jim@meyering.net>
uid [ unknown] Jim Meyering <meyering@fb.com>
uid [ unknown] Jim Meyering <meyering@gnu.org>
If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.
gpg --locate-external-key jim@meyering.net
gpg --recv-keys 7FD9FCCB000BEEEE
wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=gzip&download=1' | gpg --import -
As a last resort to find the key, you can try the official GNU
keyring:
wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
gpg --keyring gnu-keyring.gpg --verify gzip-1.14.tar.gz.sig
This release is based on the gzip git repository, available as
git clone https://git.savannah.gnu.org/git/gzip.git
with commit fbc4883eb9c304a04623ac506dd5cf5450d055f1 tagged as v1.14.
For a summary of changes and contributors, see:
https://git.sv.gnu.org/gitweb/?p=gzip.git;a=shortlog;h=v1.14
or run this command from a git-cloned gzip directory:
git shortlog v1.13..v1.14
This release was bootstrapped with the following tools:
Autoconf 2.72.76-2f64
Automake 1.17.0.91
Gnulib 2025-01-31 553ab924d2b68d930fae5d3c6396502a57852d23
NEWS
* Noteworthy changes in release 1.14 (2025-04-09) [stable]
** Bug fixes
'gzip -d' no longer omits the last partial output buffer when the
input ends unexpectedly on an IBM Z platform.
[bug introduced in gzip-1.11]
'gzip -l' no longer misreports lengths of multimember inputs.
[bug introduced in gzip-1.12]
'gzip -S' now rejects suffixes containing '/'.
[bug present since the beginning]
** Changes in behavior
The GZIP environment variable is now silently ignored except for the
options -1 (--fast) through -9 (--best), --rsyncable, and --synchronous.
This brings gzip into line with more-cautious compressors like zstd
that limit environment variables' effect to relatively innocuous
performance issues. You can continue to use scripts to specify
whatever gzip options you like.
'zmore' is no longer installed on platforms lacking 'more'.
** Performance improvements
gzip now decompresses significantly faster by computing CRCs via a
slice by 8 algorithm, and faster yet on x86-64 platforms that
support pclmul instructions.
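For background on the table-lookup idea behind that speedup, here is a minimal one-byte-at-a-time table-driven CRC-32 in Python. It is only an illustration: gzip's new code is slice-by-8 C code that processes eight bytes per table step, using eight precomputed tables instead of one.

```python
# Reflected CRC-32 polynomial used by gzip (ISO-HDLC).
POLY = 0xEDB88320

# Precompute one 256-entry table; slice-by-8 would precompute eight.
TABLE = []
for n in range(256):
    c = n
    for _ in range(8):
        c = (c >> 1) ^ POLY if c & 1 else c >> 1
    TABLE.append(c)

def crc32(data: bytes) -> int:
    """One byte per table lookup; the slice-by-8 variant consumes
    eight bytes per iteration for much higher throughput."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc = (crc >> 8) ^ TABLE[(crc ^ byte) & 0xFF]
    return crc ^ 0xFFFFFFFF

# Standard CRC-32 check value.
print(hex(crc32(b"123456789")))  # 0xcbf43926
```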
10 April, 2025 04:34AM by Jim Meyering
This is to announce coreutils-9.7, a stable release.
There have been 63 commits by 11 people in the 12 weeks since 9.6,
with a focus on bug fixing and stabilization.
See the NEWS below for a brief summary.
Thanks to everyone who has contributed!
The following people contributed changes to this release:
Bruno Haible (1) Jim Meyering (2)
Collin Funk (2) Lukáš Zaoral (1)
Daniel Hofstetter (1) Mike Swanson (1)
Frédéric Yhuel (1) Paul Eggert (21)
G. Branden Robinson (1) Pádraig Brady (32)
Grisha Levit (1)
Pádraig [on behalf of the coreutils maintainers]
==================================================================
Here is the GNU coreutils home page:
https://gnu.org/s/coreutils/
Here are the compressed sources:
https://ftp.gnu.org/gnu/coreutils/coreutils-9.7.tar.gz (15MB)
https://ftp.gnu.org/gnu/coreutils/coreutils-9.7.tar.xz (5.9MB)
Here are the GPG detached signatures:
https://ftp.gnu.org/gnu/coreutils/coreutils-9.7.tar.gz.sig
https://ftp.gnu.org/gnu/coreutils/coreutils-9.7.tar.xz.sig
Use a mirror for higher download bandwidth:
https://www.gnu.org/order/ftp.html
Here are the SHA1 and SHA256 checksums:
File: coreutils-9.7.tar.gz
SHA1 sum: bfebebaa1aa59fdfa6e810ac07d85718a727dcf6
SHA256 sum: 0898a90191c828e337d5e4e4feb71f8ebb75aacac32c434daf5424cda16acb42
File: coreutils-9.7.tar.xz
SHA1 sum: 920791e12e7471479565a066e116a087edcc0df9
SHA256 sum: e8bb26ad0293f9b5a1fc43fb42ba970e312c66ce92c1b0b16713d7500db251bf
Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:
gpg --verify coreutils-9.7.tar.gz.sig
The signature should match the fingerprint of the following key:
pub rsa4096/0xDF6FD971306037D9 2011-09-23 [SC]
Key fingerprint = 6C37 DC12 121A 5006 BC1D B804 DF6F D971 3060 37D9
uid [ultimate] Pádraig Brady <P@draigBrady.com>
uid [ultimate] Pádraig Brady <pixelbeat@gnu.org>
If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.
gpg --locate-external-key P@draigBrady.com
gpg --recv-keys DF6FD971306037D9
wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=coreutils&download=1' | gpg --import -
As a last resort to find the key, you can try the official GNU
keyring:
wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
gpg --keyring gnu-keyring.gpg --verify coreutils-9.7.tar.gz.sig
This release is based on the coreutils git repository, available as
git clone https://git.savannah.gnu.org/git/coreutils.git
with commit 8e075ff8ee11692c5504d8e82a48ed47a7f07ba9 tagged as v9.7.
For a summary of changes and contributors, see:
https://git.sv.gnu.org/gitweb/?p=coreutils.git;a=shortlog;h=v9.7
or run this command from a git-cloned coreutils directory:
git shortlog v9.6..v9.7
This release was bootstrapped with the following tools:
Autoconf 2.72.70-9ff9
Automake 1.16.5
Gnulib 2025-04-07 41e7b7e0d159d8ac0eb385964119f350ac9dfc3f
Bison 3.8.2
NEWS
* Noteworthy changes in release 9.7 (2025-04-09) [stable]
** Bug fixes
'cat' would fail with "input file is output file" if input and
output are the same terminal device and the output is append-only.
[bug introduced in coreutils-9.6]
'cksum -a crc' misbehaved on aarch64 with 32-bit uint_fast32_t.
[bug introduced in coreutils-9.6]
dd with the 'nocache' flag will now detect all failures to drop the
cache for the whole file. Previously it may have erroneously succeeded.
[bug introduced with the "nocache" feature in coreutils-8.11]
'ls -Z dir' would crash on all systems, and 'ls -l' could crash
on systems like Android with SELinux but without xattr support.
[bug introduced in coreutils-9.6]
`ls -l` could output spurious "Not supported" errors in certain cases,
like with dangling symlinks on cygwin.
[bug introduced in coreutils-9.6]
timeout would fail to timeout commands with infinitesimal timeouts.
For example `timeout 1e-5000 sleep inf` would never timeout.
[bug introduced with timeout in coreutils-7.0]
sleep, tail, and timeout would sometimes sleep for slightly less
time than requested.
[bug introduced in coreutils-5.0]
'who -m' now outputs entries for remote logins. Previously login
entries prefixed with the service (like "sshd") were not matched.
[bug introduced in coreutils-9.4]
** Improvements
'logname' correctly returns the user who logged in the session,
on more systems. Previously on musl or uclibc it would have merely
output the LOGNAME environment variable.
09 April, 2025 11:36AM by Pádraig Brady
This is to announce diffutils-3.12, a stable bug-fix release.
Thanks to Paul Eggert and Collin Funk for the bug fixes.
There have been 13 commits by 4 people in the 9 weeks since 3.11.
See the NEWS below for a brief summary.
Thanks to everyone who has contributed!
The following people contributed changes to this release:
Collin Funk (1)
Jim Meyering (6)
Paul Eggert (5)
Simon Josefsson (1)
Jim
[on behalf of the diffutils maintainers]
==================================================================
Here is the GNU diffutils home page:
https://gnu.org/s/diffutils/
Here are the compressed sources:
https://ftp.gnu.org/gnu/diffutils/diffutils-3.12.tar.gz (3.3MB)
https://ftp.gnu.org/gnu/diffutils/diffutils-3.12.tar.xz (1.9MB)
Here are the GPG detached signatures:
https://ftp.gnu.org/gnu/diffutils/diffutils-3.12.tar.gz.sig
https://ftp.gnu.org/gnu/diffutils/diffutils-3.12.tar.xz.sig
Use a mirror for higher download bandwidth:
https://www.gnu.org/order/ftp.html
Here are the SHA1 and SHA256 checksums:
e3f3e8ef171fcb54911d1493ac6066aa3ed9df38 diffutils-3.12.tar.gz
W+GBsn7Diq0kUAgGYaZOShdSuym31QUr8KAqcPYj+bI= diffutils-3.12.tar.gz
c2f302726d2709c6881c4657430a671abe5eedfa diffutils-3.12.tar.xz
fIt/n8hgkUH96pzs6FJJ0whiQ5H/Yd7a9Sj8szdyff0= diffutils-3.12.tar.xz
Verify the base64 SHA256 checksum with cksum -a sha256 --check
from coreutils-9.2 or OpenBSD's cksum since 2007.
Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:
gpg --verify diffutils-3.12.tar.gz.sig
The signature should match the fingerprint of the following key:
pub rsa4096/0x7FD9FCCB000BEEEE 2010-06-14 [SCEA]
Key fingerprint = 155D 3FC5 00C8 3448 6D1E EA67 7FD9 FCCB 000B EEEE
uid [ unknown] Jim Meyering <jim@meyering.net>
uid [ unknown] Jim Meyering <meyering@fb.com>
uid [ unknown] Jim Meyering <meyering@gnu.org>
If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.
gpg --locate-external-key jim@meyering.net
gpg --recv-keys 7FD9FCCB000BEEEE
wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=diffutils&download=1' | gpg --import -
As a last resort to find the key, you can try the official GNU
keyring:
wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
gpg --keyring gnu-keyring.gpg --verify diffutils-3.12.tar.gz.sig
This release is based on the diffutils git repository, available as
git clone https://git.savannah.gnu.org/git/diffutils.git
with commit 16681a3cbcea47e82683c713b0dac7d59d85a6fa tagged as v3.12.
For a summary of changes and contributors, see:
https://git.sv.gnu.org/gitweb/?p=diffutils.git;a=shortlog;h=v3.12
or run this command from a git-cloned diffutils directory:
git shortlog v3.11..v3.12
This release was bootstrapped with the following tools:
Autoconf 2.72.76-2f64
Automake 1.17.0.91
Gnulib 2025-04-04 3773db653242ab7165cd300295c27405e4f9cc79
NEWS
* Noteworthy changes in release 3.12 (2025-04-08) [stable]
** Bug fixes
diff -r no longer merely summarizes when comparing an empty regular
file to a nonempty regular file.
[bug#76452 introduced in 3.11]
diff -y no longer crashes when given nontrivial differences.
[bug#76613 introduced in 3.11]
09 April, 2025 03:16AM by Jim Meyering
The initial injustice of proprietary software often leads to further injustices: malicious functionalities.
The introduction of unjust techniques in nonfree software, such as back doors, DRM, tethering, and others, has become ever more frequent. Nowadays, it is standard practice.
We at the GNU Project show examples of malware that has been introduced into a wide variety of products and dis-services people use every day, and of companies that make use of these techniques.
Microsoft's Software is Malware
Apple's Operating Systems Are Malware
08 April, 2025 05:31PM by Rob Musial
Download from https://ftp.gnu.org/gnu/gperf/gperf-3.2.tar.gz
New in this release:
07 April, 2025 10:50AM by Bruno Haible
This is to announce datamash-1.9, a stable release.
Home page: https://www.gnu.org/software/datamash
GNU Datamash is a command-line program which performs basic numeric,
textual and statistical operations on input textual data files.
It is designed to be portable and reliable, and to help researchers
easily automate analysis pipelines without writing code or even
short scripts. It is very friendly to GNU Bash and GNU Make pipelines.
There have been 52 commits by 5 people in the 141 weeks since 1.8.
See the NEWS below for a brief summary.
The following people contributed changes to this release:
Dima Kogan (1)
Erik Auerswald (14)
Georg Sauthoff (4)
Shawn Wagner (6)
Timothy Rice (27)
Thanks to everyone who has contributed!
Please report any problem you may experience to the bug-datamash@gnu.org
mailing list.
Happy Hacking!
- Tim
==================================================================
Here is the GNU datamash home page:
https://gnu.org/s/datamash/
Here are the compressed sources and a GPG detached signature:
https://ftpmirror.gnu.org/datamash/datamash-1.9.tar.gz
https://ftpmirror.gnu.org/datamash/datamash-1.9.tar.gz.sig
Use a mirror for higher download bandwidth:
https://www.gnu.org/order/ftp.html
Here are the SHA1 and SHA256 checksums:
File: datamash-1.9.tar.gz
SHA1 sum: 935c9f24a925ce34927189ef9f86798a6303ec78
SHA256 sum: f382ebda03650dd679161f758f9c0a6cc9293213438d4a77a8eda325aacb87d2
Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:
gpg --verify datamash-1.9.tar.gz.sig
The signature should match the fingerprint of the following key:
pub ed25519 2022-04-05 [SC]
3338 2C8D 6201 7A10 12A0 5B35 BDB7 2EC3 D3F8 7EE6
uid Timothy Rice (Yubikey 5 Nano 13139911) <trice@posteo.net>
If that command fails because you don't have the required public key,
or that public key has expired, try the following command to retrieve
or refresh it, and then rerun the 'gpg --verify' command.
wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
gpg --keyring gnu-keyring.gpg --verify datamash-1.9.tar.gz.sig
This release is based on the datamash git repository, available as
git clone https://git.savannah.gnu.org/git/datamash.git
with commit 39101c367a07f2c1aea8f3b540fc490735596e6a tagged as v1.9.
For a summary of changes and contributors, see:
https://git.sv.gnu.org/gitweb/?p=datamash.git;a=shortlog;h=v1.9
or run this command from a git-cloned datamash directory:
git shortlog v1.8..v1.9
This release was bootstrapped with the following tools:
Autoconf 2.72
Automake 1.17
Gnulib 2025-03-27 54fc57c23dcd833819a7adbdfcc3bd1c805103a8
NEWS
** Changes in Behavior
datamash(1), decorate(1): Add short options -h and -V for --help and --version
respectively.
datamash(1): the rand operation now uses getrandom(2) for generating a random
seed, instead of relying on date/time/pid mixing.
** New Features
datamash(1): add operation dotprod for calculating the scalar product of two
columns.
datamash(1): Add option -S/--seed to set a specific seed for pseudo-random
number generation.
datamash(1): Add option --vnlog to enable experimental support for the vnlog
format. More about vnlog is at https://github.com/dkogan/vnlog.
datamash(1): -g/groupby takes ranges of columns (e.g. 1-4)
** Bug Fixes
datamash(1) now correctly calculates the "antimode" for a sequence
of numbers. Problem reported by Kingsley G. Morse Jr. in
<https://lists.gnu.org/archive/html/bug-datamash/2023-12/msg00003.html>.
When using the locale's decimal separator as field separator, numeric
datamash(1) operations now work correctly. Problem reported by Jérémie
Roquet in
<https://lists.gnu.org/archive/html/bug-datamash/2018-09/msg00000.html>
and by Jeroen Hoek in
<https://lists.gnu.org/archive/html/bug-datamash/2023-11/msg00000.html>.
datamash(1): The "getnum" operation now stays inside the specified field.
04 April, 2025 08:52PM by Tim Rice
I am pleased to announce the release of GNU patch 2.8.
The project page is at https://savannah.gnu.org/projects/patch
The sources can be downloaded from http://ftpmirror.gnu.org/patch/
The sha256sum checksums are:
308a4983ff324521b9b21310bfc2398ca861798f02307c79eb99bb0e0d2bf980 patch-2.8.tar.gz
7f51814e85e780b39704c9b90d264ba3515377994ea18a2fabd5d213e5a862bc patch-2.8.tar.bz2
f87cee69eec2b4fcbf60a396b030ad6aa3415f192aa5f7ee84cad5e11f7f5ae3 patch-2.8.tar.xz
This release is also GPG signed. You can download the signature by appending '.sig' to the URL. If the 'gpg --verify' command fails because you don't have the required public key, then run this command to import it:
gpg --recv-keys D5BF9FEB0313653A
Key fingerprint = 259B 3792 B3D6 D319 212C C4DC D5BF 9FEB 0313 653A
NEWS since v2.7.6 (2018-02-03):
- … GNU/Linux platforms where time_t defaults to 32 bits.
- … as encouraged by POSIX.1-2024.
- … These bytes would otherwise cause unpredictable behavior.
- … and in other places where POSIX requires support for these sequences.
- … Use --enable-gcc-warnings=expensive if you still want it.
- … large sizes, possible stack overflow, I/O errors, memory exhaustion,
  races with other processes, and signals arriving at inopportune moments.
The following people contributed changes to this release:
Andreas Gruenbacher (34)
Bruno Haible (5)
Collin Funk (2)
Eli Schwartz (1)
Jean Delvare (2)
Jim Meyering (1)
Kerin Millar (1)
Paul Eggert (166)
Petr Vaněk (1)
Sam James (1)
Takashi Iwai (1)
Special thanks to Paul Eggert for doing the vast majority of the work.
Regards,
Andreas Gruenbacher
29 March, 2025 06:41PM by Andreas Gruenbacher
This release is meant to fix multiple security issues that are present
in the GRUB version we use (2.06+).
Users who replaced the GNU Boot picture / logo with untrusted
pictures could have been affected if the pictures they used were
specially crafted to exploit a vulnerability in GRUB and take full
control of the computer. In general it's a good idea to avoid using
untrusted pictures in GRUB or other boot software to limit such risks,
because software can have bugs (a similar issue also happened in a
free software UEFI implementation).
Users who implemented user-respecting flavors of secure-boot, whether
with GPG signatures and/or a GRUB password combined with full disk
encryption, are also affected, as these security vulnerabilities could
enable people to bypass such secure-boot schemes.
In addition there are security vulnerabilities in GRUB's file system
support, which also enable execution of code. When booting, GRUB has
to load files (like the Linux or linux-libre kernel) that are executed
anyway, but in some cases these vulnerabilities could still affect users.
This could happen when trying to boot from a USB key while another
USB key is plugged in whose file system was crafted to take control of
the computer.
At the time of writing, the GNU Boot maintainers are not aware of any exploits.
Why it took so long
-------------------
On 18 February, the GRUB maintainer posted some patches on the
grub-devel mailing list to notify people that some security
vulnerabilities in GRUB had been fixed, and which commits fixed them.
One of the GNU Boot maintainers saw these patches but didn't read the
mails in detail, assumed that a new GRUB release was near, and decided
to wait for it, as that would make things easier: GRUB releases are
tested in many different situations.
However the thread posting these patches also mentioned that a new
release would take too much time and that the GRUB contributors and/or
maintainers already had a lot to deal with.
It took a while to realize the issue: a second GNU Boot maintainer saw
the GRUB security vulnerabilities later on, realized that nothing had
yet happened on the GRUB side, and looked into the issue.
In addition, the computer of one of the GNU Boot maintainers broke,
which also delayed the review of the GNU Boot patches meant to fix
these security issues.
These patches also contain fixes for the GNU Boot build system, to
ensure users building GNU Boot themselves really do get an updated
GRUB version.
As this is a new release candidate, we also need help with reports on
which computers and/or configurations it works or doesn't work,
especially because we had to update to an unreleased GRUB version to
get the fixes (see below for more details).
Other affected distributions?
-----------------------------
We started by telling the Canoeboot and Libreboot maintainer about the
issue, only to find out that it had been fixed there since 18
February, and that their users had also been notified via a news post,
so everything is good on that side.
For most 100% free distributions, using GRUB from git would be
a significant effort in testing and/or in packaging.
We notified Trisquel, Parabola and Guix, and the ones who responded are
not comfortable with updating GRUB to a not-yet-released git
revision. In the case of Parabola, though, nothing prevents adding a new
grub-git package without known vulnerabilities alongside the
existing grub package, so patches for that are welcome.
As for the other distributions, most of them do support secure boot
(by supporting UEFI secure boot), but they are probably aware of the
issue as (maintainers of) distributions like Debian or Arch Linux
either responded to the thread on the GRUB mailing list, or were
mentioned as having fixed the issue in that thread.
At the time of writing, the affected GRUB versions do not seem to be
blacklisted yet by UEFI implementations, so there is still some time to
fix the issue, and things like GRUB passwords can usually be bypassed
easily unless people use distributions like GNU Boot or Canoeboot and
configure both the hardware and the software to support a secure-boot
scheme that respects users' freedoms.
As for PureOS, we just notified them in a bug report since they support
UEFI secure boot; on the other hand we don't know if they are aware of
that (they are based on Debian, so it could be inherited from Debian
rather than something supported/advertised), and because Purism (the
company behind PureOS) ships computers with its own secure-boot
scheme (PureBoot), which works in a very different way, we are not
sure whether their users could be affected.
References and details
----------------------
This release should fix the following CVEs affecting previous GRUB
2.06: CVE-2025-0690, CVE-2025-0622, CVE-2024-45775, CVE-2024-45777,
CVE-2024-45778, CVE-2024-45779, CVE-2024-45781, CVE-2024-45782,
CVE-2024-45783, CVE-2025-0624, CVE-2025-0677, CVE-2025-0684,
CVE-2025-0685, CVE-2025-0686, CVE-2025-0689, CVE-2025-1125.
More details are available in the "[SECURITY PATCH 00/73] GRUB2
vulnerabilities - 2025/02/18" thread from Daniel Kiper (Tuesday, 18
February 2025, archived online at
https://lists.gnu.org/archive/html/grub-devel/2025-02/msg00024.html).
24 March, 2025 09:11PM by GNUtoo
Around a year ago I discussed two concerns with software release archives (tarball artifacts) that could be improved to increase confidence in the supply-chain security of software releases. Repeating the goals for simplicity:
While implementing these ideas for a small project was accomplished within weeks – see my announcement of Libntlm version 1.8 – addressing this in complex projects uncovered concerns with tools that had to be resolved, and things stalled for many months pending that work.
I had the notion that these two goals were easy and shouldn’t be hard to accomplish. I still believe that, but have had to realize that improving tooling to support these goals takes time. It seems clear that these concepts are not universally agreed on and implemented generally.
I’m now happy to recap some of the work that led to releases of libtasn1 v4.20.0, inetutils v2.6, libidn2 v2.3.8 and libidn v1.43. These releases all achieve these goals. I am working on a bunch more projects to support these ideas too.
What have the obstacles been so far to making this happen? It may help others who are in the same process of addressing these concerns to have a high-level introduction to the issues I encountered. Source code for the projects above is available, and anyone can look at the solutions to learn how the problems are addressed.
First let’s look at the problems we need to solve to make “git-archive” style tarballs usable:
To build usable binaries from a minimal tarball, the build needs to know which version number it is. Traditionally this information was stored inside configure.ac in git. However, I use gnulib’s git-version-gen to infer the version number from the git tag or git commit instead. The git tag information is not available in a git-archive
tarball. My solution to this was to make use of the export-subst
feature of the .gitattributes
file. I store the file .tarball-version-git
in git containing the magic cookie like this:
$Format:%(describe)$
With this, git-archive
will replace it with a useful version identifier on export; see the libtasn1 patch to achieve this. To make use of this information, the git-version-gen
script was enhanced to read this information, see the gnulib patch. This is invoked by ./configure
to figure out which version number the package is for.
We want translations to be included in the minimal source tarball for it to be buildable. Traditionally these files are retrieved by the maintainer from the Translation project when running ./bootstrap
, however there are two problems with this. The first one is that there is no strong authentication or versioning information on this data, the tools just download and place whatever wget
downloaded into your source tree (printf-style injection attack anyone?). We could improve this (e.g., publish GnuPG signed translations messages with clear versioning), however I did not work on that further. The reason is that I want to support offline builds of packages. Downloading random things from the Internet during builds does not work when building a Debian package, for example. The translation project could solve this by making a monthly tarball with their translations available, for distributors to pick up and provide as a separate package that could be used as a build dependency. However that is not how these tools and projects are designed. Instead I reverted back to storing translations in git, something that I did for most projects back when I was using CVS 20 years ago. Hooking this into ./bootstrap
and gettext workflow can be tricky (ideas for improvement most welcome!), but I used a simple approach to store all directly downloaded po/*.po
files directly as po/*.po.in
and make the ./bootstrap
tool move them in place, see the libidn2 commit, followed by the actual ‘make update-po’ commit with all the translations, where one essential step is:
# Prime po/*.po from fall-back copy stored in git.
for poin in po/*.po.in; do
  po=$(echo $poin | sed 's/\.in$//')
  test -f $po || cp -v $poin $po
done
ls po/*.po | sed 's|.*/||; s|\.po$||' > po/LINGUAS
Most build dependencies are in the shape of “You need a C compiler”. However some come in the shape of “source-code files intended to be vendored”, and gnulib is a huge repository of such files. The latter is a problem when building from a minimal git archive. It is possible to consider translation files as a class of vendor files, since they need to be copied verbatim into the project build directory for things to work. The same goes for *.m4
macros from the GNU Autoconf Archive. However I’m not confident that the solution for all vendor files must be the same. For translation files and for Autoconf Archive macros, I have decided to put these files into git and merge them manually occasionally. For gnulib files, in some projects like OATH Toolkit I also store all gnulib files in git, which effectively resolves this concern. (Incidentally, the reason for doing so was originally that running ./bootstrap
took forever since there are five gnulib instances used, which is no longer the case since gnulib-tool was rewritten in Python.) For most projects, however, I rely on ./bootstrap
to fetch a gnulib git clone when building. I like this model, however it doesn’t work offline. One way to resolve this is to make the gnulib git repository available for offline use, and I’ve made some effort to make this happen via a Gnulib Git Bundle and have explained how to implement this approach for Debian packaging. I don’t think that is sufficient as a generic solution though; it is mostly applicable to building old releases that use old gnulib files. It won’t work when building from CI/CD pipelines, for example, where I have settled on a crude way of fetching and unpacking a particular gnulib snapshot, see this Libntlm patch. This is much faster than working with git submodules and cloning gnulib during ./bootstrap
. Essentially this is doing:
GNULIB_REVISION=$(. bootstrap.conf >&2; echo $GNULIB_REVISION)
wget -nv https://gitlab.com/libidn/gnulib-mirror/-/archive/$GNULIB_REVISION/gnulib-mirror-$GNULIB_REVISION.tar.gz
gzip -cd gnulib-mirror-$GNULIB_REVISION.tar.gz | tar xf -
rm -fv gnulib-mirror-$GNULIB_REVISION.tar.gz
export GNULIB_SRCDIR=$PWD/gnulib-mirror-$GNULIB_REVISION
./bootstrap --no-git
./configure
make
This goes without saying, but if you don’t test that building from a git-archive
style tarball works, you are likely to regress at some point. Use CI/CD techniques to continuously test that a minimal git-archive
tarball leads to a usable build.
So that wasn’t hard, was it? You should now be able to publish a minimal git-archive
tarball and users should be able to build your project from it.
I recommend naming these archives PROJECT-vX.Y.Z-src.tar.gz, replacing PROJECT with your project name and X.Y.Z with your version number. The archive should have only one sub-directory, named PROJECT-vX.Y.Z/, containing all the source-code files. This differentiates it from traditional PROJECT-X.Y.Z.tar.gz tarballs in that it embeds the git tag (which typically starts with v) and contains a wildcard-friendly -src substring. Alas there is no consistency around this naming pattern, and GitLab, GitHub, Codeberg etc. all seem to use their own slightly incompatible variants.
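Under that convention, producing such an archive is essentially a one-liner with git archive's --prefix option. Here is a self-contained sketch; the project name proj and version v1.2.3 are placeholders:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q proj && cd proj
git config user.email you@example.org
git config user.name You
echo 'int main(void){return 0;}' > main.c
git add -A && git commit -qm initial
git tag -a v1.2.3 -m v1.2.3
# One top-level directory proj-v1.2.3/ and a wildcard-friendly -src suffix:
git archive --format=tar.gz --prefix=proj-v1.2.3/ \
    -o "$tmp/proj-v1.2.3-src.tar.gz" v1.2.3
tar -tzf "$tmp/proj-v1.2.3-src.tar.gz"
```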
Let’s go on to see what is needed to achieve reproducible “make dist” source tarballs. This is the release artifact that most users use, and they often contain lots of generated files and vendor files. These files are included to make it easy to build for the user. What are the challenges to make these reproducible?
The first part is to realize that if you use tool X with version A to generate a file that goes into the tarball, version B of that tool may produce different outputs. This is a generic concern and it cannot be solved. We want our build tools to evolve and produce better outputs over time. What can be addressed is to avoid needless differences. For example, many tools store timestamps and versioning information in the generated files. This causes needless differences, which makes audits harder. I have worked on some of these, like Autoconf Archive timestamps, but solving all of these examples will take a long time, and some upstreams are reluctant to incorporate these changes. My approach meanwhile is to build things using similar environments and compare the outputs for differences. I’ve found that the various closely related forks of GNU/Linux distributions are useful for this. Trisquel 11 is based on Ubuntu 22.04, and building my projects using both and comparing the differences only gives me the relevant differences to improve. This can be extended to compare AlmaLinux with RockyLinux (for both versions 8 and 9), Devuan 5 against Debian 12, PureOS 10 with Debian 11, and so on.
Sometimes tools store timestamps in files in a way that is harder to fix. Two notable examples of this are *.po translation files and Texinfo manuals. For translation files, I have resolved this by making sure the files use a predictable POT-Creation-Date timestamp, which I set to the modification timestamp of the NEWS file in the repository (which in turn I set elsewhere to the time of the latest git commit) like this:
dist-hook: po-CreationDate-to-mtime-NEWS
.PHONY: po-CreationDate-to-mtime-NEWS
po-CreationDate-to-mtime-NEWS: mtime-NEWS-to-git-HEAD
	$(AM_V_GEN)for p in $(distdir)/po/*.po $(distdir)/po/$(PACKAGE).pot; do \
	  if test -f "$$p"; then \
	    $(SED) -e 's,POT-Creation-Date: .*\\n",POT-Creation-Date: '"$$(env LC_ALL=C TZ=UTC0 stat --format=%y $(srcdir)/NEWS | cut -c1-16,31-)"'\\n",' < $$p > $$p.tmp && \
	    if cmp $$p $$p.tmp > /dev/null; then \
	      rm -f $$p.tmp; \
	    else \
	      mv $$p.tmp $$p; \
	    fi \
	  fi \
	done
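Outside of a Makefile, the same normalization can be sketched directly in shell. The file names below are placeholders of my own, not from the post; the point is that the stat/cut combination produces the "YYYY-MM-DD HH:MM+ZONE" form that PO headers use:

```shell
set -e
tmp=$(mktemp -d)
# A fake PO header line and a NEWS file with a fixed mtime (placeholders):
printf '"POT-Creation-Date: 2024-01-01 00:00+0000\\n"\n' > "$tmp/demo.po"
TZ=UTC0 touch -m -t 202503220000.00 "$tmp/NEWS"
# GNU stat prints e.g. '2025-03-22 00:00:00.000000000 +0000';
# cut -c1-16,31- keeps '2025-03-22 00:00' plus '+0000'.
stamp=$(env LC_ALL=C TZ=UTC0 stat --format=%y "$tmp/NEWS" | cut -c1-16,31-)
# Rewrite the header to the NEWS mtime, exactly as the dist-hook does:
sed -e 's,POT-Creation-Date: .*\\n",POT-Creation-Date: '"$stamp"'\\n",' \
    "$tmp/demo.po"
```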
Similarly, I set a predictable modification time on the Texinfo source file like this:
dist-hook: mtime-NEWS-to-git-HEAD
.PHONY: mtime-NEWS-to-git-HEAD
mtime-NEWS-to-git-HEAD:
	$(AM_V_GEN)if test -e $(srcdir)/.git \
	    && command -v git > /dev/null; then \
	  touch -m -t "$$(git log -1 --format=%cd \
	    --date=format-local:%Y%m%d%H%M.%S)" $(srcdir)/NEWS; \
	fi
However I’ve realized that this needs to happen earlier, and probably has to be run at ./configure time, because the doc/version.texi file is generated on first build, before running ‘make dist’, and for some reason the file is not rebuilt at release time. The Automake texinfo integration is a bit inflexible about providing hooks to extend the dependency tracking.
The method to address these differences isn’t really important, and they change over time depending on preferences. What is important is that the differences are eliminated.
Traditionally ChangeLog files were manually prepared, and still are for some projects. I maintain git2cl, but recently I’ve settled on gnulib’s gitlog-to-changelog because doing so avoids another build dependency (although the output formatting is different and arguably worse for my git commit style). So the ChangeLog files are generated from git history. This means a shallow clone will not produce the same ChangeLog file depending on how deep it was cloned. For Libntlm I simply disabled use of a generated ChangeLog because I wanted to support an even more extreme form of reproducibility: I wanted to be able to reproduce the full ‘make dist’ source archives from a minimal ‘git-archive’ source archive. However for other projects I’ve settled on a middle ground. I realized that for ‘git describe’ to produce reproducible outputs, the shallow clone needs to include the last release tag. So it felt acceptable to assume that the clone is not minimal, but instead has some but not all of the history. I settled on the following recipe to produce ChangeLogs covering all changes since the last release.
dist-hook: gen-ChangeLog
.PHONY: gen-ChangeLog
gen-ChangeLog:
	$(AM_V_GEN)if test -e $(srcdir)/.git; then \
	  LC_ALL=en_US.UTF-8 TZ=UTC0 \
	    $(top_srcdir)/build-aux/gitlog-to-changelog \
	      --srcdir=$(srcdir) -- \
	      v$(PREV_VERSION)~.. > $(distdir)/cl-t && \
	  { printf '\n\nSee the source repo for older entries\n' \
	      >> $(distdir)/cl-t && \
	    rm -f $(distdir)/ChangeLog && \
	    mv $(distdir)/cl-t $(distdir)/ChangeLog; } \
	fi
I’m undecided about the usefulness of generated ChangeLog files within ‘make dist’ archives. Before we have stable and secure archival of git repositories widely implemented, I can see some utility of this in case we lose all copies of the upstream git repositories. I can also sympathize with the view that the concept of ChangeLog files died when we started to generate them from git logs: the files no longer serve any purpose, and we can ask people to go look at the git log instead of reading these generated non-source files.
Distributions come and go, and old releases of them go out of support and often stop working. Which build environment should I choose to build the official release archives? To my knowledge only Guix offers a reliable way to re-create an older build environment (guix time-machine) that has bootstrappable properties for additional confidence. However I had two difficult problems here. The first one was that I needed Guix container images that were usable in GitLab CI/CD pipelines, and this side-tracked me for a while. The second one delayed my effort for many months, and I was inclined to give up. Libidn distributes a C# implementation. Some of the C# source code files included in the release tarball are generated. By what? You guessed it: by a C# program, with its source code included in the distribution. This means nobody could reproduce the source tarball of Libidn without trusting someone else’s C# compiler binaries, which were built from binaries of earlier releases, chaining back into something that nobody ever attempts to build any more and that would likely fail to build due to bit-rot. I had two basic choices: either remove the C# implementation from Libidn (which may be a good idea for other reasons, since the C and C# are unrelated implementations), or build the source tarball on some binary-only distribution like Trisquel. Neither felt appealing to me, but a late Christmas gift of a reproducible Mono came to Guix that resolved this.
For Libidn, one section of the manual has an image illustrating some concepts. The PNG, PDF and EPS outputs were generated via fig2dev from a *.fig file (hello 1985!) that I had stored in git. Over time, I had also started to store the generated outputs because of build issues. At some point it was possible to post-process the PDF outputs with grep to remove some timestamps, however with compression this is no longer possible, and the grep command I used actually resulted in a 0-byte output file. So my embedded binaries in git were no longer reproducible. I first set out to fix this by post-processing things properly, however I then realized that the *.fig file is not really easy to work with in a modern world. I wanted to create an image from some text-file description of the image. Eventually, via the Guix manual on guix graph, I came to re-discover the graphviz language and its tool called dot (hello 1993!). All well then? Oh no, the PDF output embeds timestamps. Binary editing of PDFs no longer works through simple grep, remember? I was back where I started, and after some (soul- and web-) searching I discovered that Ghostscript (hello 1988!) pdfmarks could be used to modify things here. Cooperating with Automake’s texinfo rules related to make dist proved once again a worthy challenge, and eventually I ended up with a Makefile.am snippet to build images that can be condensed into:
info_TEXINFOS = libidn.texi
libidn_TEXINFOS += libidn-components.png
imagesdir = $(infodir)
images_DATA = libidn-components.png
EXTRA_DIST += components.dot
DISTCLEANFILES = \
	libidn-components.eps libidn-components.png libidn-components.pdf

libidn-components.eps: $(srcdir)/components.dot
	$(AM_V_GEN)$(DOT) -Nfontsize=9 -Teps < $< > $@.tmp
	$(AM_V_at)! grep %%CreationDate $@.tmp
	$(AM_V_at)mv $@.tmp $@

libidn-components.pdf: $(srcdir)/components.dot
	$(AM_V_GEN)$(DOT) -Nfontsize=9 -Tpdf < $< > $@.tmp
# A simple sed on CreationDate is no longer possible due to compression.
# 'exiftool -CreateDate' is an alternative to 'gs', but adds ~4kb to the file.
# Ghostscript adds <1kb. Why can't 'dot' avoid setting CreationDate?
	$(AM_V_at)printf '[ /ModDate ()\n /CreationDate ()\n /DOCINFO pdfmark\n' > pdfmarks
	$(AM_V_at)$(GS) -q -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=$@.tmp2 $@.tmp pdfmarks
	$(AM_V_at)rm -f $@.tmp pdfmarks
	$(AM_V_at)mv $@.tmp2 $@

libidn-components.png: $(srcdir)/components.dot
	$(AM_V_GEN)$(DOT) -Nfontsize=9 -Tpng < $< > $@.tmp
	$(AM_V_at)mv $@.tmp $@

pdf-recursive: libidn-components.pdf
dvi-recursive: libidn-components.eps
ps-recursive: libidn-components.eps
info-recursive: $(top_srcdir)/.version libidn-components.png
Surely this can be improved, but I’m not yet certain which way forward is best. I like having a text representation as the source of the image. I’m sad that the new image size is ~48kb compared to the old image size of ~1kb. I tried using exiftool -CreateDate as an alternative to Ghostscript, but using it to remove the timestamp added ~4kb to the file size, and naturally I was appalled by this ignorance of impending doom.
Again, you need to continuously test the properties you desire. This means building your project twice using different environments and comparing the results. I’ve settled on a small GitLab CI/CD pipeline job that performs bit-by-bit comparison of generated ‘make dist’ archives. It also performs bit-by-bit comparison of generated ‘git-archive’ artifacts. See the Libidn2 .gitlab-ci.yml 0-compare job, which essentially is:
0-compare:
  image: alpine:latest
  stage: repro
  needs: [ B-AlmaLinux8, B-AlmaLinux9, B-RockyLinux8, B-RockyLinux9, B-Trisquel11, B-Ubuntu2204, B-PureOS10, B-Debian11, B-Devuan5, B-Debian12, B-gcc, B-clang, B-Guix, R-Guix, R-Debian12, R-Ubuntu2404, S-Trisquel10, S-Ubuntu2004 ]
  script:
    - cd out
    - sha256sum */*.tar.* */*/*.tar.* | sort | grep -- -src.tar.
    - sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
    - sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
    - sha256sum */*.tar.* */*/*.tar.* | grep -- -src.tar. | sort | uniq -c -w64 | grep -v '^ 1 '
    - sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^ 1 '
    # Confirm modern git-archive tarball reproducibility
    - cmp b-almalinux8/src/*.tar.gz b-almalinux9/src/*.tar.gz
    - cmp b-almalinux8/src/*.tar.gz b-rockylinux8/src/*.tar.gz
    - cmp b-almalinux8/src/*.tar.gz b-rockylinux9/src/*.tar.gz
    - cmp b-almalinux8/src/*.tar.gz b-debian12/src/*.tar.gz
    - cmp b-almalinux8/src/*.tar.gz b-devuan5/src/*.tar.gz
    - cmp b-almalinux8/src/*.tar.gz r-guix/src/*.tar.gz
    - cmp b-almalinux8/src/*.tar.gz r-debian12/src/*.tar.gz
    - cmp b-almalinux8/src/*.tar.gz r-ubuntu2404/src/*v2.*.tar.gz
    # Confirm old git-archive (export-subst but long git describe) tarball reproducibility
    - cmp b-trisquel11/src/*.tar.gz b-ubuntu2204/src/*.tar.gz
    # Confirm really old git-archive (no export-subst) tarball reproducibility
    - cmp b-debian11/src/*.tar.gz b-pureos10/src/*.tar.gz
    # Confirm 'make dist' generated tarball reproducibility
    - cmp b-almalinux8/*.tar.gz b-rockylinux8/*.tar.gz
    - cmp b-almalinux9/*.tar.gz b-rockylinux9/*.tar.gz
    - cmp b-pureos10/*.tar.gz b-debian11/*.tar.gz
    - cmp b-devuan5/*.tar.gz b-debian12/*.tar.gz
    - cmp b-trisquel11/*.tar.gz b-ubuntu2204/*.tar.gz
    - cmp b-guix/*.tar.gz r-guix/*.tar.gz
    # Confirm 'make dist' from git-archive tarball reproducibility
    - cmp s-trisquel10/*.tar.gz s-ubuntu2004/*.tar.gz
Notice that I discovered that ‘git archive’ outputs differ over time too, which is natural but a bit of a nuisance. The output of the job is illuminating in that all SHA256 checksums of generated tarballs are included; for example the libidn2 v2.3.8 job log:
$ sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
368488b6cc8697a0a937b9eb307a014396dd17d3feba3881e6911d549732a293 b-trisquel11/libidn2-2.3.8.tar.gz
368488b6cc8697a0a937b9eb307a014396dd17d3feba3881e6911d549732a293 b-ubuntu2204/libidn2-2.3.8.tar.gz
59db2d045fdc5639c98592d236403daa24d33d7c8db0986686b2a3056dfe0ded b-debian11/libidn2-2.3.8.tar.gz
59db2d045fdc5639c98592d236403daa24d33d7c8db0986686b2a3056dfe0ded b-pureos10/libidn2-2.3.8.tar.gz
5bd521d5ecd75f4b0ab0fc6d95d444944ef44a84cad859c9fb01363d3ce48bb8 s-trisquel10/libidn2-2.3.8.tar.gz
5bd521d5ecd75f4b0ab0fc6d95d444944ef44a84cad859c9fb01363d3ce48bb8 s-ubuntu2004/libidn2-2.3.8.tar.gz
7f1dcdea3772a34b7a9f22d6ae6361cdcbe5513e3b6485d40100b8565c9b961a b-almalinux8/libidn2-2.3.8.tar.gz
7f1dcdea3772a34b7a9f22d6ae6361cdcbe5513e3b6485d40100b8565c9b961a b-rockylinux8/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06 b-clang/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06 b-debian12/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06 b-devuan5/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06 b-gcc/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06 r-debian12/libidn2-2.3.8.tar.gz
acf5cbb295e0693e4394a56c71600421059f9c9bf45ccf8a7e305c995630b32b r-ubuntu2404/libidn2-2.3.8.tar.gz
cbdb75c38100e9267670b916f41878b6dbc35f9c6cbe60d50f458b40df64fcf1 b-almalinux9/libidn2-2.3.8.tar.gz
cbdb75c38100e9267670b916f41878b6dbc35f9c6cbe60d50f458b40df64fcf1 b-rockylinux9/libidn2-2.3.8.tar.gz
f557911bf6171621e1f72ff35f5b1825bb35b52ed45325dcdee931e5d3c0787a b-guix/libidn2-2.3.8.tar.gz
f557911bf6171621e1f72ff35f5b1825bb35b52ed45325dcdee931e5d3c0787a r-guix/libidn2-2.3.8.tar.gz
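The duplicate detection in the job above relies on uniq -c -w64 grouping lines by their first 64 characters, i.e. by the SHA-256 checksum. A minimal sketch with throwaway files (the file names are placeholders of mine, not from the pipeline):

```shell
set -e
tmp=$(mktemp -d)
printf 'alpha\n' > "$tmp/a"
printf 'alpha\n' > "$tmp/b"   # identical content to a, so same checksum
printf 'beta\n'  > "$tmp/c"
# uniq -c -w64 compares only the 64-char checksum column, so identical
# builds collapse into one line with a count of how many agreed:
sha256sum "$tmp/a" "$tmp/b" "$tmp/c" | sort | uniq -c -w64
```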
I’m sure I have forgotten or suppressed some challenges (sprinkling LANG=C TZ=UTC0
helps) related to these goals, but my hope is that this discussion of solutions will inspire you to implement these concepts for your software project too. Please share your thoughts and additional insights in a comment below. Enjoy Happy Hacking in the course of practicing this!
24 March, 2025 11:09AM by simon
GNU Parallel 20250322 ('Have you said thank you') has been released. It is available for download at: lbry://@GnuParallel:4
Quote of the month:
te amo gnu parallel
-- Ayleen I. C. @ayleen_ic
New in this release:
News about GNU Parallel:
GNU Parallel - For people who live life in the parallel lane.
If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.
GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.
If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.
For example you can run this to convert all jpeg files into png and gif files and have a progress bar:
parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif
Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:
find . -name '*.jpg' |
parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200
You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/
You can install GNU Parallel in just 10 seconds with:
$ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
fetch -o - http://pi.dk/3 ) > install.sh
$ sha1sum install.sh | grep c555f616391c6f7c28bf938044f4ec50
12345678 c555f616 391c6f7c 28bf9380 44f4ec50
$ md5sum install.sh | grep 707275363428aa9e9a136b9a7296dfe4
70727536 3428aa9e 9a136b9a 7296dfe4
$ sha512sum install.sh | grep b24bfe249695e0236f6bc7de85828fe1f08f4259
83320d89 f56698ec 77454856 895edc3e aa16feab 2757966e 5092ef2d 661b8b45
b24bfe24 9695e023 6f6bc7de 85828fe1 f08f4259 6ce5480a 5e1571b2 8b722f21
$ bash install.sh
Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.
When using programs that use GNU Parallel to process data for publication please cite:
O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.
If you like GNU Parallel:
If you use programs that use GNU Parallel for research:
If GNU Parallel saves you money:
GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.
The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
When using GNU SQL for a publication please cite:
O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.
GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
23 March, 2025 12:19PM by Ole Tange
Version 3.19 of GNU mailutils is available for download. This is a bug-fixing release. Noteworthy changes are:
22 March, 2025 03:58PM by Sergey Poznyakoff
GNU DBM version 1.25 is available for download. New in this release:
This function provides a general-purpose interface for opening and creating GDBM files. It combines the possibilities of gdbm_open and gdbm_fd_open and provides detailed control over database file locking.
The command prints the collision chains for the current bucket, or for the buckets identified by its arguments.
The output of a gdbmtool command can be connected to the input of a shell command using the traditional pipeline syntax.
22 March, 2025 02:38PM by Sergey Poznyakoff
The Algol 68 programming language got a new homepage: https://www.algol68-lang.org.
We are pleased to announce the release of GNUnet 0.24.0.
GNUnet is an alternative network stack for building secure, decentralized and
privacy-preserving distributed applications.
Our goal is to replace the old insecure Internet protocol stack.
Starting from an application for secure publication of files, it has grown to
include all kinds of basic protocol components and applications towards the
creation of a GNU internet.
This is a new major release. Major versions may break protocol compatibility with the 0.23.0X versions. Please be aware that Git master is thus henceforth (and has been for a while) INCOMPATIBLE with the 0.23.0X GNUnet network, and interactions between old and new peers will result in issues. In terms of usability, users should be aware that there are still a number of known open issues, in particular with respect to ease of use, but also some critical privacy issues, especially for mobile users. Also, the nascent network is tiny and thus unlikely to provide good anonymity or extensive amounts of interesting information. As a result, the 0.24.0 release is still only suitable for early adopters with some reasonable pain tolerance.
After almost a year of testing, we believe that the Meson build system is stable enough to be used as the default build system. In order to reduce maintenance overhead, we are planning to phase out the autotools build by the next major release. Meson delivers up to 10x faster development builds. It also makes it easier to build a single libgnunet.so for future monolithic builds on other platforms such as Android.
The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A
Note that due to mirror synchronization, not all links might be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/
A detailed list of changes can be found in the git log, the NEWS file and the bug tracker. Noteworthy highlights are
In addition to this list, you may also want to consult our bug tracker at bugs.gnunet.org which lists about 190 more specific issues.
This release was the work of many people. The following people contributed code and were thus easily identified: Christian Grothoff, Florian Dold, dvn, TheJackiMonster, oec, ch3, and Martin Schanzenbach.
I am happy to announce a new release of GNU poke, version 4.3.
This is a bugfix release in the 4.x series.
See the file NEWS in the distribution tarball for a list of issues
fixed in this release.
The tarball poke-4.3.tar.gz is now available at
https://ftp.gnu.org/gnu/poke/poke-4.3.tar.gz.
> GNU poke (http://www.jemarch.net/poke) is an interactive, extensible
> editor for binary data. Not limited to editing basic entities such
> as bits and bytes, it provides a full-fledged procedural,
> interactive programming language designed to describe data
> structures and to operate on them.
Thanks to the people who contributed with code and/or documentation to this release.
Happy poking!
Mohammad-Reza Nabipoor
10 March, 2025 11:05PM by Mohammad-Reza Nabipoor
Richard Stallman was interviewed during his visit to the University of Bozen-Bolzano, Italy, in February. Clear questions with short, simply worded answers suitable for students and newcomers to the free software world.
02 March, 2025 08:29PM by Dora Scilipoti
Download from https://ftp.gnu.org/pub/gnu/gettext/gettext-0.24.tar.gz
New in this release:
25 February, 2025 02:05PM by Bruno Haible
GNU Parallel 20250222 ('Grete Tange') has been released. It is available for download at: lbry://@GnuParallel:4
Quote of the month:
Use GNU Parallel and thank me later
-- pratikbin | NodeOps @pratikbin
New in this release:
News about GNU Parallel:
GNU Parallel - For people who live life in the parallel lane.
If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.
GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.
If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.
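This order-preserving guarantee is the same one a thread pool's ordered map provides; a minimal Python analogy (not GNU Parallel's implementation) shows the idea:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_ordered(func, inputs, workers=4):
    """Run func over inputs in parallel, but return results in
    input order -- as if the calls had been made sequentially."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Executor.map buffers out-of-order completions and yields
        # results in the order of the inputs.
        return list(pool.map(func, inputs))
```

Because results come back in input order regardless of which job finishes first, the combined output can safely be piped into another program.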
For example you can run this to convert all jpeg files into png and gif files and have a progress bar:
parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif
Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:
find . -name '*.jpg' |
parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200
You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/
You can install GNU Parallel in just 10 seconds with:
$ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
fetch -o - http://pi.dk/3 ) > install.sh
$ sha1sum install.sh | grep c555f616391c6f7c28bf938044f4ec50
12345678 c555f616 391c6f7c 28bf9380 44f4ec50
$ md5sum install.sh | grep 707275363428aa9e9a136b9a7296dfe4
70727536 3428aa9e 9a136b9a 7296dfe4
$ sha512sum install.sh | grep b24bfe249695e0236f6bc7de85828fe1f08f4259
83320d89 f56698ec 77454856 895edc3e aa16feab 2757966e 5092ef2d 661b8b45
b24bfe24 9695e023 6f6bc7de 85828fe1 f08f4259 6ce5480a 5e1571b2 8b722f21
$ bash install.sh
Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.
When using programs that use GNU Parallel to process data for publication please cite:
O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.
22 February, 2025 05:28PM by Ole Tange
Hi, All:
Please join me in welcoming our new member:
User Details:
-------------
Name: LS-Shandong
Login: ls_shandong
Email: ls-shandong@outlook.com
I wish LS-Shandong a wonderful journey in GNU CTT.
Happy Hacking
wxie
18 February, 2025 12:47AM by Wensheng XIE
In a decisive step towards the modernization of healthcare in the country, the Dr. Hugo Mendoza Pediatric Hospital (HPHM) has officially presented its new GNU Health Management System. This digital platform, designed to optimize both medical care and administrative processes, marks a significant advance in the digital transformation of pediatric services in the Dominican Republic.
The launch of the innovative system was attended by José Miguel Rodríguez, deputy administrative director of the hospital, who highlighted the importance of digitalization in improving healthcare services. “We are starting a new era in children's health. With GNU Health, doctors will have faster and more efficient access to medical information, which will enable more informed decisions and ensure safer and more timely care,” said Rodríguez during the event.
Dhamelisse Then, director of the hospital, highlighted the impact this new tool will have on the quality of the services offered. “The integration of GNU Health not only improves service, but also reinforces our commitment to innovation and excellence in pediatric care. This advance will be fundamental for the lives of thousands of children and their families,” said Then.
Translated from source:
https://cdn.com.do/nacionales/hospital-hugo-mendoza-lanza-innovador-software-para-transformar-la-salud-pediatrica/
04 February, 2025 12:34PM by Luis Falcon
This is a minor release
Changes in 2.10.1:
- update the Spanish translation
- fix in gtypits.typ, to jump from the global menu to the menus of the
individual lessons
- small fix to u.typ lesson
- remove cmdline.c and cmdline.h files from the git repo; this will
only affect those who build from git sources; a dependency on gengetopt
was added to README.git
- include the version.sh file, so autoconf can always update project
version
Addendum: since v2.10, gtypist saves its configuration settings in the
file .gtypistrc
Sources for this release can be downloaded here:
https://ftp.gnu.org/gnu/gtypist/gtypist-2.10.1.tar.gz
03 February, 2025 02:58PM by Mihai Gătejescu
This is to announce diffutils-3.11, a stable release.
Special thanks to Paul Eggert for doing the vast majority of the work and
to Bruno Haible for his many changes here and his tons of work tending gnulib.
There have been 252 commits by 5 people in the 89 weeks since 3.10.
See the NEWS below for a brief summary.
Thanks to everyone who has contributed!
The following people contributed changes to this release:
Bruno Haible (12)
Collin Funk (3)
Gleb Fotengauer-Malinovskiy (1)
Jim Meyering (26)
Paul Eggert (210)
Jim
[on behalf of the diffutils maintainers]
==================================================================
Here is the GNU diffutils home page:
https://gnu.org/s/diffutils/
Here are the compressed sources:
https://ftp.gnu.org/gnu/diffutils/diffutils-3.11.tar.gz (3.3MB)
https://ftp.gnu.org/gnu/diffutils/diffutils-3.11.tar.xz (1.9MB)
Here are the GPG detached signatures:
https://ftp.gnu.org/gnu/diffutils/diffutils-3.11.tar.gz.sig
https://ftp.gnu.org/gnu/diffutils/diffutils-3.11.tar.xz.sig
Use a mirror for higher download bandwidth:
https://www.gnu.org/order/ftp.html
Here are the SHA1 and SHA256 checksums:
bc8791022b18a34c7ee9c3079e414f843de0e1a9 diffutils-3.11.tar.gz
yAo8K/h+JS/n1gW4umv5KNdakLVfO/z3xKTzN+xi/DE= diffutils-3.11.tar.gz
1cf58ac440fc279b363169a17de3662e03bb266d diffutils-3.11.tar.xz
pz7wX+N91YX32HBo5KBjl2BBn4EBOL11xh3aofniEx4= diffutils-3.11.tar.xz
Verify the base64 SHA256 checksum with cksum -a sha256 --check
from coreutils-9.2 or OpenBSD's cksum since 2007.
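If neither a recent coreutils nor OpenBSD's cksum is at hand, the base64 SHA256 values above can also be checked with a short Python script (a convenience sketch, not part of the release):

```python
import base64
import hashlib

def b64_sha256(path, bufsize=1 << 16):
    """Return the base64-encoded SHA-256 digest of a file, in the
    same format as the checksums listed above."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(bufsize), b""):
            h.update(chunk)
    return base64.b64encode(h.digest()).decode("ascii")

# e.g. compare b64_sha256("diffutils-3.11.tar.xz") with the value above
```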
Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:
gpg --verify diffutils-3.11.tar.gz.sig
The signature should match the fingerprint of the following key:
pub rsa4096/0x7FD9FCCB000BEEEE 2010-06-14 [SCEA]
Key fingerprint = 155D 3FC5 00C8 3448 6D1E EA67 7FD9 FCCB 000B EEEE
uid [ unknown] Jim Meyering <jim@meyering.net>
uid [ unknown] Jim Meyering <meyering@fb.com>
uid [ unknown] Jim Meyering <meyering@gnu.org>
If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.
gpg --locate-external-key jim@meyering.net
gpg --recv-keys 7FD9FCCB000BEEEE
wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=diffutils&download=1' | gpg --import -
As a last resort to find the key, you can try the official GNU
keyring:
wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
gpg --keyring gnu-keyring.gpg --verify diffutils-3.11.tar.gz.sig
This release is based on the diffutils git repository, available as
git clone https://git.savannah.gnu.org/git/diffutils.git
with commit 3f326ae3ea7556e35152e13f01a0a4d8b8b4bc70 tagged as v3.11.
For a summary of changes and contributors, see:
https://git.sv.gnu.org/gitweb/?p=diffutils.git;a=shortlog;h=v3.11
or run this command from a git-cloned diffutils directory:
git shortlog v3.10..v3.11
This release was bootstrapped with the following tools:
Autoconf 2.72.47-21cb
Automake 1.17.0.91
Gnulib 2025-01-31 553ab924d2b68d930fae5d3c6396502a57852d23
NEWS
* Noteworthy changes in release 3.11 (2025-02-02) [stable]
** Improvements
Programs now quote file names more consistently in diagnostics.
For example, "cmp 'none of' /etc/passwd" now might output
"cmp: EOF on ‘none of’ which is empty" instead of outputting
"cmp: EOF on none of which is empty". In diagnostic messages
that traditionally omit quotes and where backward compatibility
seems to be important, programs continue to omit quotes unless
a file name contains shell metacharacters, in which case programs
use shell quoting. For example, although diff continues to output
"Only in a: b" as before for most file names, it now outputs
"Only in 'a: b': 'c: d'" instead of "Only in a: b: c: d" because the
file names 'a: b' and 'c: d' contain spaces. For compatibility
with previous practice, diff -c and -u headers continue to quote for
C rather than for the shell.
diff now outputs more information when symbolic links differ, e.g.,
"Symbolic links ‘d/f’ -> ‘a’ and ‘e/f’ -> ‘b’ differ", not just
"Symbolic links d/f and e/f differ". Special files too, e.g.,
"Character special files ‘d/f’ (1, 3) and ‘e/f’ (5, 0) differ", not
"File d/f is a character special file while file e/f is a character
special file".
diff's --ignore-case (-i) and --ignore-file-name-case options now
support multi-byte characters. For example, they treat Greek
capital Δ like small δ when input uses UTF-8.
diff now supports multi-byte characters when treating white space.
In options like --expand-tabs (-t), --ignore-space-change (-b) and
--ignore-tab-expansion (-E), diff now recognizes non-ASCII space
characters and counts columns for non-ASCII characters.
** Bug fixes
cmp -bl no longer omits "M-" from bytes with the high bit set in
single-byte locales like en_US.iso8859-1. This fix causes the
behavior to be locale independent, and to be the same as the
longstanding behavior in the C locale and in locales using UTF-8.
[bug introduced in 2.9]
cmp -i N and -n N no longer fail merely because N is enormous.
[bug present since "the beginning"]
cmp -s no longer mishandles /proc files, for which the Linux kernel
reports a zero size even when nonempty. For example, the following
shell command now outputs nothing, as it should:
cp /proc/cmdline t; cmp -s /proc/cmdline t || echo files differ
[bug present since "the beginning"]
diff -E no longer mishandles some input lines containing '\a', '\b',
'\f', '\r', '\v', or '\0'.
[bug present since 2.8]
diff -ly no longer mishandles non-ASCII input.
[bug#64461 introduced in 2.9]
diff - A/B now works correctly when standard input is a directory,
by reading a file named B in that directory.
[bug present since "the beginning"]
diff no longer suffers from race conditions in some cases
when comparing files in a mutating file system.
[bug present since "the beginning"]
** Release
distribute gzip-compressed tarballs once again
03 February, 2025 05:09AM by Jim Meyering
We are happy to announce the release of GNU gprofng-gui, version 2.0.
gprofng GUI is a full-fledged graphical interface for the gprofng
profiler, which is part of the GNU binutils.
The tarball gprofng-gui-2.0.tar.gz is now available at
https://ftp.gnu.org/gnu/gprofng-gui/gprofng-gui-2.0.tar.gz.
--
Vladimir Mezentsev
Jose E. Marchesi
28 January 2025
28 January, 2025 04:48PM by Jose E. Marchesi
Today we're looking at the results from the Contributor section of the Guix User and Contributor Survey (2024). The goal was to understand how people contribute to Guix and their overall development experience. A great development experience is important because a Free Software project's sustainability depends on happy contributors to continue the work!
See Part 1 for insights about Guix adoption, and Part 2 for users overall experience. With over 900 participants there's lots of interesting insights!
The survey defined someone as a Contributor if they sent patches of any form. That includes changes to code, but also other improvements such as documentation and translations. Some Guix contributors have commit access to the Guix repository, but it's a much more extensive group than those with commit rights.
Of the survey's 943 full responses, 297 participants classified themselves as current contributors and 58 as previous contributors, so 355 participants were shown this section.
The first question was (Q22), How many patches do you estimate you've contributed to Guix in the last year?
Number of patches | Count | Percentage |
---|---|---|
1 — 5 patches | 190 | 61% |
6 — 20 patches | 60 | 19% |
21 — 100 patches | 36 | 12% |
100+ patches | 27 | 9% |
None, but I've contributed in the past | 42 | N/A |
Note that the percentages in this table, and throughout the posts, are rounded to make them easier to refer to.
The percentage is the percentage of contributors that sent patches in the last year. That means the 42 participants who were previous contributors have been excluded.
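The Q22 percentages follow directly from the counts in the table once those 42 previous contributors are excluded:

```python
# Counts from the Q22 table, previous contributors (42) excluded
patch_counts = {
    "1-5 patches": 190,
    "6-20 patches": 60,
    "21-100 patches": 36,
    "100+ patches": 27,
}

total = sum(patch_counts.values())  # 313 current contributors
percentages = {band: round(100 * count / total)
               for band, count in patch_counts.items()}
```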
Figure 13 shows this visually:
As we can see, many contributors send only a few patches (61%), perhaps updating a package that they personally care about. At the other end of the scale, a few contributors send a phenomenal number of patches.
It's interesting to investigate the size of Guix's contributor community. While running the survey I did some separate research to find out the total number of contributors. I defined an Active contributor as someone who had sent a patch in the last two years, which was a total of 454 people. I deduplicated by names, but as this is a count by email address there may be some double counting.
This research also showed the actual number of patches that were sent by contributors:
Number of patches | Count | Percentage of Contributors |
---|---|---|
1 — 5 patches | 187 | 41% |
6 — 20 patches | 102 | 22% |
21 — 100 patches | 91 | 20% |
100+ patches | 74 | 16% |
Figure 14 shows this:
Together this gives us an interesting picture of the contributor community:
The survey also asked contributors (Q23), How do you participate in the development of Guix?
Type of contribution | Count | Percentage |
---|---|---|
Develop new code (patches, services, modules, etc) | 312 | 59% |
Review patches | 65 | 12% |
Triage, handle and test bugs | 65 | 12% |
Write documentation | 38 | 7% |
Quality Assurance (QA) and testing | 23 | 4% |
Organise the project (e.g. mailing lists, infrastructure etc) | 16 | 3% |
Localise and translate | 12 | 2% |
Graphical design and User Experience (UX) | 2 | 0.4% |
Figure 15 shows this as a pie chart (upping my game!):
Of course, the same person can contribute in multiple areas: as there were 531 responses to this question, from 355 participants, we can see that's happening.
Complex projects like Guix need a variety of contributions, not just code. Guix's web site needs visual designers who have great taste, and certainly a better sense of colour than mine! We need documentation writers to provide the variety of articles and how-tos that we've seen users asking for in the comments. The list goes on!
Unsurprisingly, Guix is code heavy, with 60% of contributors focusing on this area, but it's great to see that there are people contributing across the project. Perhaps there's a role you can play? ... yes, you reading this post!
FOSS projects exist on a continuum of paid and unpaid contribution. Many projects are wholly built by volunteers. Equally, there are many large and complex projects where the reality is that they're built by paid developers — after all, everyone needs to eat!
To explore this area the survey then asked (Q24), Are you paid to contribute to Guix?
The results show:
Type of compensation | Count | Percentage |
---|---|---|
I'm an unpaid volunteer | 328 | 94% |
I'm partially paid to work on Guix (e.g. part of my employment or a small grant) | 19 | 5% |
I'm full-time paid to work on Guix | 1 | 0.3% |
No answer | 7 | N/A |
We can see this as Figure 16 :
Some thoughts:
Ensuring contributors continue to be excited and active in the project is important for its health. Ultimately, fewer developers means less can be done. In volunteer projects there's always natural churn as contributors' lives change. But fixing any issues that discourage contributors is important for maintaining a healthy project.
Question 25 was targeted at the 59 participants who identified themselves as Previous Contributors. It asked, You previously contributed to Guix, but stopped, why did you stop?
The detailed results are:
Category | Count | Percentage of Previous Contributors |
---|---|---|
External circumstances (e.g. other priorities, not enough time, etc) | 28 | 35% |
Response to contributions was slow and/or reviews arduous | 12 | 15% |
The contribution process (e.g. email and patch flow) | 11 | 14% |
Developing in Guix/Guile was too difficult (e.g. REPL/developer tooling) | 6 | 8% |
Guix speed and performance | 3 | 4% |
Project co-ordination, decision making and governance | 2 | 3% |
Lack of appreciation, acknowledgement and/or loneliness | 2 | 3% |
Negative interactions with other contributors (i.e. conflict) | 2 | 3% |
Burnt out from contributing to Guix | 2 | 3% |
Learning Guix internals was too complex (e.g. poor documentation) | 1 | 1% |
Social pressure of doing reviews and/or turning down contributions | 1 | 1% |
Other | 10 | 13% |
Figure 17 shows this graphically:
There were 80 answers from the 59 participants so some participants chose more than one reason.
Q26 asked contributors to grade their likelihood of contributing further; this is essentially a satisfaction score.
The question was, If you currently contribute patches to Guix, how likely are you to do so in the future?
Category | Count | Percentage |
---|---|---|
Definitely not | 7 | 2% |
Probably not | 34 | 10% |
Moderately likely | 80 | 23% |
Likely | 111 | 31% |
Certain | 123 | 35% |
Figure 18 shows this graphically:
Out of the audience of current and previous contributors, 355 in total:
The survey then explored areas of friction for contributors. Anything that reduces friction should increase overall satisfaction for existing contributors.
The question (Q27) was, What would help you contribute more to the project?
Answer | Count | Percentage |
---|---|---|
Timely reviews and actions taken on contributions | 203 | 20% |
Better read-eval-print loop (REPL) and debugging | 124 | 12% |
Better performance and tuning (e.g. faster guix pull) | 102 | 10% |
Better documentation on Guix's internals (e.g. Guix modules) | 100 | 10% |
Guidance and mentoring from more experienced contributors | 100 | 10% |
Addition of a pull request workflow like GitHub/Gitlab | 90 | 9% |
Improved documentation on the contribution process | 77 | 8% |
Nothing, the limitations to contributing are external to the project | 65 | 7% |
More acknowledgement of contributions | 40 | 4% |
More collaborative interactions (e.g. sprints) | 41 | 4% |
Other | 56 | 6% |
Figure 19 bar chart visualises this:
The 355 contributors selected 933 options for this question, so many of them selected multiple aspects that would help them to contribute more to the project.
Conclusions we can draw are:
Jumping ahead, the last question of the contributor section (Q30) was a comment box. It asked, Is there anything else that you would do to improve contributing to Guix?
The full list of comments from Q27, and Q30 are available and worth reading (or at least scanning!).
Looking across all of them I've created some common themes - picking a couple of example comments to avoid repetition:
It's common in FOSS projects to focus on the technical issues, but Free Software is a social endeavour where organisational and social aspects are just as important. Q28 focused on the social and organisational parts of contribution by asking, What organisational and social areas would you prioritise to improve Guix?
This was a ranked question where participants had to prioritise their top 3. The rationale for asking it in this way was to achieve prioritisation.
It's useful to look at the results in two ways, first the table where participants set their highest priority (Rank 1):
Category | Count | Percentage |
---|---|---|
Improve the speed and capacity of the contribution process | 213 | 63% |
Project decision making and co-ordination | 36 | 11% |
Fund raising | 22 | 7% |
Request-for-comments (RFC) process for project-wide decision making | 17 | 5% |
Regular releases (i.e. release management) | 19 | 6% |
In-person collaboration and sprints | 8 | 2% |
Promotion and advocacy | 23 | 7% |
Out of the 355 participants in this section, 338 answered this question and marked their highest priority.
Figure 20 shows it as a pie chart:
This second table shows how each category was prioritised across all positions:
Category | Rank 1 | Rank 2 | Rank 3 | Overall priority |
---|---|---|---|---|
Project decision making and co-ordination | 2 | 1 | 3 | 1 |
Promotion and advocacy | 3 | 3 | 1 | 2 |
Fund raising | 4 | 5 | 2 | 3 |
Request-for-comments (RFC) process for project-wide decision making | 6 | 2 | 4 | 4 |
Improve the speed and capacity of the contribution process | 1 | 6 | 6 | 5 |
Regular releases (i.e. release management) | 5 | 4 | 5 | 6 |
In-person collaboration and sprints | 7 | 7 | 7 | 7 |
Figure 21 shows this as a stacked bar chart. Each of the categories is the position for a rank (priority), so the smallest overall priority is the most important:
Looking at these together:
The partner question was Q29 which asked, What technical areas would you prioritise to improve Guix overall?
This was also a ranked question where participants had to prioritise their top 3.
Category | Count | Percentage |
---|---|---|
Debugging and error reporting | 63 | 18% |
Making the latest version of packages available (package freshness) | 50 | 14% |
Automate patch testing and acceptance | 42 | 12% |
Runtime performance (speed and memory use) | 36 | 10% |
Package reliability (e.g. installs and works) | 30 | 9% |
Contribution workflow (e.g. Pull Requests) | 26 | 8% |
More packages (more is better!) | 23 | 7% |
Improving Guix's modules | 20 | 6% |
Project infrastructure (e.g. continuous integration) | 20 | 6% |
Guix System services | 12 | 3% |
Guix Home services | 10 | 3% |
Stable releases (e.g. regular tested releases) | 8 | 2% |
Focused packages (fewer is better!) | 5 | 1% |
There were 345 answers for the highest priority, 327 for the second rank and 285 for the third rank — so not as significant a drop-off as for the social question. Figure 22 shows this as a bar chart:
As before I've converted them to priorities in each rank. The smallest overall score is the highest priority:
Category | Rank 1 | Rank 2 | Rank 3 | Overall priority |
---|---|---|---|---|
Automate patch testing and acceptance | 3 | 2 | 1 | 1 |
Runtime performance (speed and memory use) | 4 | 1 | 3 | 2 |
Debugging and error reporting | 1 | 4 | 7 | 3 |
Project infrastructure (e.g. continuous integration) | 9 | 3 | 2 | 4 |
Contribution workflow (e.g. Pull Requests) | 6 | 5 | 5 | 5 |
Making the latest version of packages available (package freshness) | 2 | 8 | 6 | 6 |
Package reliability (e.g. installs and works) | 5 | 7 | 4 | 7 |
More packages (more is better!) | 7 | 6 | 10 | 8 |
Guix Home services | 11 | 10 | 8 | 9 |
Improving Guix's modules | 8 | 12 | 9 | 10 |
Guix System services | 10 | 9 | 11 | 11 |
Stable releases (e.g. regular tested releases) | 12 | 11 | 12 | 12 |
Focused packages (fewer is better!) | 13 | 13 | 13 | 13 |
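One plausible way to reproduce an overall ordering like this is to sum each category's position across the three ranks, smallest sum first. This is an assumption for illustration — the post does not spell out the exact formula, and categories with equal sums would need a tie-break — but for the top entries of the table the sums alone give the published order:

```python
# Rank positions (rank 1, rank 2, rank 3) for the top rows of the table
ranks = {
    "Automate patch testing and acceptance": (3, 2, 1),
    "Runtime performance (speed and memory use)": (4, 1, 3),
    "Debugging and error reporting": (1, 4, 7),
    "Project infrastructure (e.g. continuous integration)": (9, 3, 2),
    "Contribution workflow (e.g. Pull Requests)": (6, 5, 5),
}

# Lower total position = higher overall priority
totals = {name: sum(pos) for name, pos in ranks.items()}
overall = sorted(totals, key=totals.get)
```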
Figure 23 shows this as a stacked bar chart.
Some things that are interesting from this question:
That completes our review of the contributor section! Here are the key insights I draw:
We've really squeezed the juice from the lemon over these three posts — but maybe you'd like to dig into the data and do your own analysis? If so head over to the Guix Survey repository where you'll find all the data available to create your own plots!
28 January, 2025 01:00PM by Steve George
The results from the Guix User and Contributor Survey (2024) are in and we're digging into them in a series of posts! Check out the first post for the details of how users initially adopt Guix, the challenges they find while adopting it and how important it is in their environment. In this part, we're going to cover how use of Guix matures, which parts are the most loved and lots of other details.
As a reminder there were 943 full responses to the survey, of this 53% were from users and 32% were from contributors.
The middle section of the survey explored how users' relationship with Guix matured, which parts they used and where they struggled. Question 11 asked, Which parts of Guix have you used on top of another Linux distribution?
As a reminder a third (36%) of participants adopted Guix by using it as a package manager on top of another GNU/Linux distribution. The detailed results were:
Capability | Use | Stopped | Never used |
---|---|---|---|
Package manager and packages (guix package) | 48% | 26% | 24% |
Dotfiles and home environment management (guix home) | 17% | 11% | 70% |
Isolated development environments (guix shell) | 41% | 18% | 39% |
Package my own software projects | 28% | 9% | 61% |
Deployment tool (guix deploy, guix pack) | 13% | 7% | 78% |
Guix System (i.e. VM on top of your distro) | 15% | 15% | 68% |
Note that all the percentages in this table, and throughout the posts are rounded to make them easier to refer to.
The next question (Q12) asked participants, Which parts of Guix have you used on top of Guix System?
As a reminder, an earlier question (Q5) determined that 46% initially adopted Guix as a GNU/Linux distro in a graphical desktop configuration, and 5% as a GNU/Linux distro in a server configuration. The results:
Capability | Use | Stopped | Never used |
---|---|---|---|
Package manager and packages (guix package) | 64% | 17% | 17% |
Dotfiles and home environment manager (guix home) | 48% | 9% | 41% |
Isolated development environments (guix shell) | 36% | 10% | 21% |
Package my own software projects | 40% | 9% | 49% |
Deployment tool (guix deploy, guix pack) | 19% | 8% | 71% |
This gives us an interesting picture of how Guix usage develops:
The survey then asked (Q15), How have you run Guix System?
This was a multiple choice question, so in total there were 1508 answers from the 943 participants; consequently we can assume that some users deploy Guix System in multiple configurations:
Deployment type | Count | Percentage |
---|---|---|
Graphical desktop in a VM | 275 | 29% |
Graphical desktop on laptop/workstation hardware | 691 | 73% |
Server on server hardware | 223 | 24% |
Server in a VM (e.g. KVM) | 169 | 18% |
Server in a container (e.g. Docker/Singularity) | 53 | 6% |
Public Cloud (e.g. AWS) | 57 | 6% |
Other | 40 | 4% |
In the Other category there were mentions of using it on different SOC boards (e.g. RockPro64), on WSL2 and on different hosting providers (e.g. Digital Ocean, Hetzner).
Figure 7 shows the break down as a bar chart:
Some thoughts from this question:
The survey then asked (Q16), Which architectures do you use Guix on?
Again, this was multiple choice; there were 1192 answers from 943 completed surveys:
Category | Count | Percentage |
---|---|---|
x86_64 (modern Intel/AMD hardware) | 925 | 98% |
IA-32 (32-bit i586 / i686 for older hardware) | 25 | 3% |
ARM v7 (armhf 32-bit devices, Raspberry Pi 1 - Zero) | 36 | 4% |
AArch64 (ARM64, Raspberry Pi Zero 2, 3 and above) | 177 | 19% |
POWER9 (powerpc64le) | 15 | 2% |
IA-32 with GNU/Hurd (i586-gnu) | 14 | 1% |
As we might expect, x86_64 is the most popular, but there are quite a few AArch64 users as well. There are various comments in the survey about challenges when using different architectures (e.g. substitute availability, cross-compiling challenges); see the linked comments throughout these posts for more.
Proprietary drivers are an interesting topic in the Guix community. For Q17 the survey asked, Do you use proprietary drivers in your Linux deployments?
The goal was to understand driver usage across all Linux usage, whether using Guix or another distribution. As this was a multiple-choice question, there were 1275 answers from the 943 participants.
Category | Count | Percentage |
---|---|---|
No, I don't use proprietary drivers | 191 | 20% |
Yes, I use Nonguix as part of Guix System | 622 | 66% |
Yes, I use proprietary drivers on other GNU/Linux distributions | 462 | 49% |
Figure 8 shows it as a bar chart:
The next question was (Q18), Do you use other methods and channels to install applications?
One of the advantages of Guix is that it's a flexible system where users can create their own packages and share them with the community. Additionally, there are other methods for installing and using applications such as Flatpak. However, we already know that during adoption some users struggle to find the applications that they need. This question explores whether that changes as usage matures.
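One common way of sharing packages is through a channel declared in `~/.config/guix/channels.scm`. As a hedged sketch (the channel name and URL below are made up for illustration, not a real channel), an extra channel sits alongside the defaults like this:

```scheme
;; Sketch of ~/.config/guix/channels.scm.
;; 'my-channel and the URL are illustrative placeholders.
(cons (channel
        (name 'my-channel)
        (url "https://example.org/my-channel.git"))
      %default-channels)
```

After the next `guix pull`, packages from that channel become available to `guix install` and `guix search` like any other.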
The results were:
Source | Count | Percentage |
---|---|---|
I only use applications from Guix | 234 | 25% |
Packages from my host Linux distro | 352 | 37% |
Nix service on Guix System | 124 | 13% |
Nonguix channel (proprietary apps and games) | 607 | 64% |
Guix Science channel | 127 | 14% |
My own Guix channel | 442 | 47% |
Guix channels provided by other people | 303 | 32% |
Flatpak | 334 | 35% |
Other | 111 | 12% |
Figure 9 shows this visually:
Some thoughts:
The survey asked participants (Q19), How satisfied are you with Guix as a Guix user?
This is probably the most important question in the entire survey, since happy users will continue to use and contribute to the project.
Category | Count | Percentage |
---|---|---|
Very dissatisfied | 31 | 3% |
Dissatisfied | 77 | 8% |
Neutral | 180 | 19% |
Satisfied | 463 | 49% |
Very satisfied | 192 | 20% |
The bar chart is Figure 10:
For Q20 the survey asked, Which areas limit your satisfaction with Guix?
The detailed results:
Category | Count | Percentage |
---|---|---|
Difficulties with Guix tools user experience | 192 | 20% |
Difficulties using declarative configuration | 157 | 17% |
Missing or incomplete services (whether Guix Home or Guix System) | 374 | 40% |
Overall Linux complexity (i.e. not specific to Guix) | 92 | 10% |
Hardware drivers not included | 312 | 33% |
Guix runtime performance (e.g. guix pull) | 449 | 48% |
Reference documentation (i.e. the manual) | 195 | 21% |
Shortage of informal guides, examples and videos | 369 | 39% |
Error messages and debugging | 372 | 39% |
Nothing, it's perfect! | 40 | 4% |
Other | 213 | 23% |
As a visual graph:
The first thing to note is that there were 2765 entries from our 943 survey completions, so users have challenges in multiple categories.
There were also 213 comments in the Other category; the full list of comments is available. As before, I've grouped the comments. At this point we're starting to see consistency in the grouping, so to avoid a lot of repetition I've only included one example from each theme:
Not all comments fit into a specific theme; I've pulled out some other interesting ones:
The survey then asked, (Q21) Which areas should Guix's developers improve so you can use Guix more?
This was a ranking question in which participants had to prioritise their top three. The rationale for asking it this way was to get a clear sense of priorities.
It's useful to look at this in two ways, first the table where participants ranked their highest priority:
Area — Rank 1 | Count | Percentage |
---|---|---|
Making the latest versions of packages available (package freshness) | 149 | 16% |
Performance and tuning (faster guix pull) | 112 | 12% |
Make Guix easier to learn (more docs!) | 105 | 11% |
Package reliability (e.g. installs and works) | 92 | 10% |
Hardware support (drivers) | 91 | 10% |
More packages (more is better!) | 87 | 9% |
Software developer tooling (guix shell with editors, debuggers, etc) | 58 | 6% |
Make Guix easier to use | 57 | 6% |
Guix System services | 37 | 4% |
Stable releases (e.g. regular tested releases) | 35 | 4% |
Community and communications | 33 | 4% |
Guix Home services | 24 | 3% |
Focused high-quality packages (fewer is better!) | 15 | 2% |
This second table shows how each element was ranked across all positions, reordered to show the overall prioritisation:
Area | Rank 1 | Rank 2 | Rank 3 | Overall score |
---|---|---|---|---|
Performance and tuning (faster guix pull) | 2 | 1 | 1 | 4 |
Make Guix easier to learn (more docs!) | 3 | 2 | 2 | 7 |
Making the latest versions of packages available (package freshness) | 1 | 4 | 3 | 8 |
More packages (more is better!) | 6 | 3 | 4 | 13 |
Package reliability (e.g. installs and works) | 4 | 5 | 6 | 15 |
Hardware support (drivers) | 5 | 6 | 7 | 18 |
Software developer tooling (guix shell with editors, debuggers, etc) | 7 | 7 | 5 | 19 |
Guix System services | 9 | 10 | 8 | 27 |
Make Guix easier to use | 8 | 9 | 11 | 28 |
Guix Home services | 12 | 8 | 9 | 29 |
Community and communications | 11 | 12 | 10 | 33 |
Stable releases (e.g. regular tested releases) | 10 | 11 | 13 | 34 |
Focused high-quality packages (fewer is better!) | 13 | 13 | 12 | 38 |
Some thoughts on what this means:
The next section of the survey was for Contributors; we'll cover that in the third post in the series. After the contribution section, Q32 asked all users, How likely are you to financially support the Guix project?
As Guix is a volunteer project with no corporate sponsors, some aspects of the project (e.g. infrastructure and sponsored work) require funding, which was the rationale for asking this question. The results were:
Category | Count | Percentage |
---|---|---|
Unable (e.g. don't have money to do so) | 280 | 30% |
Would not (e.g. have the money to do so, but would not) | 40 | 4% |
Unlikely | 145 | 15% |
Moderately likely | 341 | 36% |
Very likely | 133 | 14% |
No answer | 4 | 0.42% |
As a graphical bar chart:
The results tell us that about 50% of users would be willing and able to financially contribute to Guix. There's also a significant set of users who are unable to do so, and one of the clear benefits of Free Software is that we can all use it without charge!
Having asked lots of structured questions, and ones about challenges, the last question (Q33) was, What do you love about Guix?
There were 620 answers, so 65% of the participants wrote something — that's a lot of love for Guix!
There were lots of positive comments about how friendly and helpful the Guix community is; the joys of using Scheme/Lisp and Guile; the importance of user-focused Free Software; and the benefits of the declarative approach.
All the comments are available to read, and I encourage you to have a scroll through them as they're very uplifting!
A few I pulled out:
In this post we've looked at the questions the survey asked participants about their use of Guix. And as a reminder, there were over 900 participants who completed the survey.
The main conclusions I draw from this part are:
If you missed it, the first post in this series covers how users adopt Guix. And, the next post will cover how Contributors interact with the project.
24 January, 2025 12:00PM by Steve George
GNU Parallel 20250122 ('4K-AZ65') has been released. It is available for download at: lbry://@GnuParallel:4
Quote of the month:
GNU Parallel too. It is my map/reduce tool with built in support to retry failed jobs.
-- Dhruva @mechanicker.bsky.social
New in this release:
News about GNU Parallel:
GNU Parallel - For people who live life in the parallel lane.
If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.
GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.
If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.
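The loop-replacement and output-ordering points above can be sketched as follows. The file names are made up for illustration, and the `parallel` invocation is left commented in case GNU Parallel is not installed:

```shell
# Create two sample files in a scratch directory (names are illustrative).
dir=$(mktemp -d)
printf 'one\ntwo\n' > "$dir/a.txt"
printf 'three\n' > "$dir/b.txt"

# Sequential shell loop: count lines in each file, one at a time.
for f in "$dir"/*.txt; do wc -l < "$f"; done

# GNU Parallel equivalent: jobs run concurrently, but the grouped output
# appears in the same order as the sequential loop above.
# parallel 'wc -l < {}' ::: "$dir"/*.txt
```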
For example you can run this to convert all jpeg files into png and gif files and have a progress bar:
parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif
Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:
find . -name '*.jpg' |
parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200
You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/
You can install GNU Parallel in just 10 seconds with:
$ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
fetch -o - http://pi.dk/3 ) > install.sh
$ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
12345678 883c667e 01eed62f 975ad28b 6d50e22a
$ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
cc21b4c9 43fd03e9 3ae1ae49 e28573c0
$ sha512sum install.sh | grep ec113b49a54e705f86d51e784ebced224fdff3f52
79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
$ bash install.sh
Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.
When using programs that use GNU Parallel to process data for publication please cite:
O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.
If you like GNU Parallel:
If you use programs that use GNU Parallel for research:
If GNU Parallel saves you money:
GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.
The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
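As a sketch of the DBURL syntax (the hosts, user names and database names below are illustrative):

```shell
# Open the interactive client for a MySQL database:
# sql mysql://user:pass@db.example.org/mydb
# Run a single query against a local PostgreSQL database:
# sql pg://user@localhost/mydb "SELECT 1;"
```

As noted above, leaving the query off drops you into that database's interactive shell.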
When using GNU SQL for a publication please cite:
O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.
GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
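As a hedged sketch of the soft and hard limits described above (the build command and load threshold are illustrative):

```shell
# Soft limit: suspend make whenever the load average exceeds 7, letting it
# run for short periods between suspensions.
# niceload -l 7 make
# Hard limit: only let make run while the load average is below 7.
# niceload --hard -l 7 make
```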
21 January, 2025 11:55PM by Ole Tange
Next week will be FOSDEM time for Guix! As in previous years, a sizable delegation of Guix community members will be in Brussels. Right before FOSDEM, about sixty of us will gather on January 30–31 for the now traditional Guix Days!
In pure unconference style, we will self-organize and discuss and/or hack on hot topics: drawing lessons from the user & contributor survey, improving the contributor workflow, sustaining our infrastructure, improving governance and processes, writing the build daemon in Guile, optimizing guix pull, Goblinizing the Shepherd… there’s no shortage of topics!
This time we’ve definitely reached the maximum capacity of our venue so please do not just show up if you did not register. Next year we’ll have to find a larger venue!
As for FOSDEM itself, here’s your agenda if you want to hear about Guix and related projects, be it on-line or on-site.
On Saturday, February 1st, in the Open Research track:
On Sunday, February 2nd, do not miss the amazing Declarative & Minimalistic Computing track! It will feature many Guile- and Guix-adjacent talks, in particular:
But really, there’s a lot more to see in this track, starting with talks by our Spritely friends on web development with Guile and Hoot by David Thompson, a presentation of the Goblins distributed computing framework by Jessica Tallon, and one on Spritely’s vision by Christine Lemmer-Webber herself (Spritely will be present in other tracks too, check it out!), as well as a talk by Andy Wingo on what may become Guile’s new garbage collector.
Also on Sunday, February 2nd, jgart (Jorge Gomez) will be presenting a survey of Immutable Linux distributions in the Distributions track, which will include RDE.
Good times ahead!
Guix Days graphics are copyright © 2024 Luis Felipe López Acevedo, under CC-BY-SA 4.0, available from Luis’ Guix graphics repository.
21 January, 2025 08:30AM by Ludovic Courtès
The initial injustice of proprietary software often leads to further injustices: malicious functionalities.
The introduction of unjust techniques in nonfree software, such as back doors, DRM, tethering, and others, has become ever more frequent. Nowadays, it is standard practice.
We at the GNU Project show examples of malware that has been introduced in a wide variety of products and dis-services people use every day, and of companies that make use of these techniques.
Apple's Operating Systems Are Malware
Microsoft's Software is Malware
17 January, 2025 07:04PM by Rob Musial