
December 2025

FAQ - Frequently Asked Questions

General

Does Back in Time support full system backups?

Back in Time is suited for file-based backups.

A full system backup is neither supported nor recommended (even though you could use Back in Time (root) and include your root folder /), because:

  • mounted file systems (even remote locations) would be included
  • the backup would need to be done from within the running system
  • Linux kernel special files (e.g. /proc) must be excluded
  • locked or open files (in an inconsistent state) must be handled
  • backups of additional disk partitions (bootloader, EFI, ...) are required to be able to boot
  • a restore cannot overwrite the running system (where the backup software is running) without the risk of crashes or data loss (normally such a restore must be done from a separate boot device)
  • ...

For full system backups look for

  • a disk imaging ("cloning") solution (e.g. Clonezilla)
  • file-based backup tools that are designed for this (e.g. Timeshift)

Does Back in Time support backups on cloud storage like OneDrive or Google Drive?

Cloud storage as a backup source or target is not supported, because Back in Time uses rsync as its backend for file transfer and therefore requires a locally mounted file system or an SSH connection. Neither is provided by cloud storage.

Even where cloud storage offers native support for mounting, most of the time it won't work because of limited support for the 'special' file access that BiT relies on (e.g. Linux hard-links, atime).

Typically, "locally mounted" cloud storage uses a web-based API (REST API), which does not support rsync.

For a discussion about this topic see Backup on OneDrive or Google Drive.

Where is the log file?

There are three distinct logs generated:

  1. The backup log contains messages specific to a particular backup at a given time. It is stored within each backup and can be accessed through the GUI.

  2. The restore log contains messages specific to a particular restore process. It is displayed in the GUI after each restore. It is also located in the folder ~/.local/share/backintime/ and is named restore_.log for the main profile, restore_2.log for the second, and so forth.

  3. The application log is generated using the syslog feature of the operating system. See How to read log entries? for further details.

How to read log entries?

Both the backup and restore log files are plain text files and can be read accordingly. Refer to Where is the log file?. The application log is generated via syslog using the identifier backintime. Depending on the version of Back In Time and the GNU/Linux distribution used, there are three ways to get the log entries.

  1. On modern systems:

    journalctl --identifier backintime

  2. With an older Back In Time version (1.4.2 or older):

    journalctl --grep backintime

  3. If the error message journalctl: command not found appears, directly examine the syslog files:

    sudo grep backintime /var/log/syslog

How to move backups to a new hard-drive?

There are three different solutions:

  1. Clone the drive with dd and enlarge the partition on the new drive to use all space. This will destroy all data on the destination drive!

     sudo dd if=/dev/sdbX of=/dev/sdcX bs=4M

    where /dev/sdbX is the partition on the source drive and /dev/sdcX is the partition on the destination drive.

    Finally use gparted to resize the partition.

  2. Copy all files using rsync -H

     rsync -avhH --info=progress2 /SOURCE /DESTINATION
  3. Copy all files using tar

    cd /SOURCE; tar cf - * | tar -C /DESTINATION/ -xf -

Make sure that your /DESTINATION contains a folder named backintime, which contains all the backups. BIT expects this folder, and needs it to import existing backups.

How to move a large directory in the backup source without duplicating the files in the backup?

If you move a file/folder in the source ("include") location that is backed up by BIT, it will be treated like a new file/folder and a new backup copy will be created for it (not hard-linked to the old one). With large directories this can fill up your backup drive quite fast.

You can avoid this by moving the file/directory in the last backup too:

  1. Create a new backup

  2. Move the original directory

  3. Manually move the same folder inside BiT's last backup, in the same way you did with the original folder

  4. Create a new backup

  5. Remove the next to last backup (the one where you moved the directory manually) to avoid problems with permissions when you try to restore from that backup
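The steps above can be sketched with a self-contained shell demo. All paths here are hypothetical temporary directories that stand in for your real source folder and BiT's real last backup; adapt them to your own layout.

```shell
# Simulate steps 2 and 3 with a temporary source tree and a stand-in backup tree.
src=$(mktemp -d)            # stands in for your backup source ("include" folder)
snap=$(mktemp -d)           # stands in for BiT's last backup of that source
mkdir -p "$src/old/big" "$snap/old/big"
echo "data" > "$src/old/big/file"
cp "$src/old/big/file" "$snap/old/big/file"

mkdir -p "$src/new" "$snap/new"
mv "$src/old/big" "$src/new/big"    # step 2: move the original directory
mv "$snap/old/big" "$snap/new/big"  # step 3: mirror the exact same move in the last backup

ls "$snap/new/big"                  # the backup layout now matches the new source layout
```

Because the last backup now mirrors the new layout, the next backup run can hard-link the moved files instead of copying them again.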

How does Back In Time compare with Timeshift?

Back In Time and Timeshift are both Linux applications that provide backup functionality.

  1. Similarities

    • Both programs are backup tools for Linux and create backups at specific times.
    • Both take backups using rsync and hard-links; common files are shared between backups, which saves disk space.
    • Both programs provide a GUI and a CLI.
    • Both programs allow you to schedule regular backups. You can also disable scheduled backups completely and create backups manually when required.
  2. Back In Time

    • It is designed to protect user data, including any folders or files.
    • It backs up the folders and files you want to protect. Modified files are transferred, while unchanged files are hard-linked into the new backup. You can restore individual files and folders.
    • It's great for protecting your personal data.
  3. TimeShift

    • It is designed for system backups, allowing you to restore the whole Linux system to a previous state without affecting user data.
    • It backs up system files, excluding personal data unless explicitly configured by the user.
    • It's good for restoring your system after an update failure or configuration change.

Additional features beside the GUI and benefits of using BIT

Back In Time stores the user and group name, which makes it possible to restore ownership and permissions even if the UID/GID changed. The numeric UID/GID is stored as well, so if the user/group doesn't exist on the system during restore, the old UID/GID is restored. Further features:

  • Inhibit suspend/hibernate during backup creation
  • Shutdown system after finish
  • Removal & retention policies to keep/remove old backups based on configurable rules
  • Support for plugins and user-defined callback scripts

Backups (snapshots)

Backup or Snapshot?

Until Back In Time version 1.6.0 the term snapshot was used instead of backup. Beginning with version 1.6.0 that term was replaced by backup. The reason was to avoid giving the impression that Back In Time creates images of storage volumes.

Does Back In Time create incremental or full backups?

Back In Time uses rsync and its --hard-links feature. Because of that, each backup is technically a full backup (it contains every file), but only files that really changed are copied (to save disk space); unchanged files are "reused" by setting so-called "hard-links".

In technical terms these are not incremental backups.

How do backups with hard-links work?

From the answer on Launchpad to the question Does auto remove smart mode merge incremental backups?

If you create a new file on a Linux filesystem (e.g. ext3) the data gets a unique number called an inode. The path of the file is a link to this inode (a database stores which path points to which inode). Every inode also has a counter for how many links point to it. After you create a new file, the counter is 1.

Now you make a new hard-link. The filesystem just stores the new path pointing to the existing inode in the database and increases the counter of our inode by 1.

If you remove a file, only the link from the path to that inode is removed and the counter is decreased by 1. Once all links to the inode are removed and the counter is zero, the filesystem knows it can overwrite that block the next time you save a new file.

The first time you create a new backup with BIT, all files will have an inode counter of 1.

backup0

path   inode  counter
fileA  1      1
fileB  2      1
fileC  3      1

Let's say you now change fileB, delete fileC and add a new fileD. BIT first makes hard-links of all files. rsync then deletes the hard-links of files that have changed and copies the new files.

backup0

path   inode  counter
fileA  1      2
fileB  2      1
fileC  3      1

backup1

path   inode  counter
fileA  1      2
fileB  4      1
fileD  5      1

Now change fileB again and make a new backup

backup0

path   inode  counter
fileA  1      3
fileB  2      1
fileC  3      1

backup1

path   inode  counter
fileA  1      3
fileB  4      1
fileD  5      2

backup2

path   inode  counter
fileA  1      3
fileB  6      1
fileD  5      2

Finally smart-remove is going to remove backup0. All that is done by smart-remove is to rm -rf (force delete everything) the whole directory of backup0.

backup0 (no longer exist)

path     inode  counter
(empty)  1      2
(empty)  2      0
(empty)  3      0

backup1

path   inode  counter
fileA  1      2
fileB  4      1
fileD  5      2

backup2

path   inode  counter
fileA  1      2
fileB  6      1
fileD  5      2

fileA is still untouched, fileB is still available in two different versions and fileC is gone for good. The blocks on your hdd that stored the data for inodes 2 and 3 can now be overwritten.

I hope this sheds some light on the "magic" behind BIT. If it's even more confusing don't hesitate to ask ;)
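The walk-through above can be reproduced on any Linux filesystem with plain shell commands; `stat -c '%i %h'` prints the inode number and the link counter:

```shell
# Reproduce the inode/link-counter behaviour described above.
tmp=$(mktemp -d)
echo "content" > "$tmp/fileA"       # new file: link counter is 1
stat -c '%i %h' "$tmp/fileA"

ln "$tmp/fileA" "$tmp/fileA.link"   # hard-link: same inode, counter is now 2
stat -c '%i %h' "$tmp/fileA.link"

rm "$tmp/fileA"                     # remove one path: counter drops back to 1
stat -c '%h' "$tmp/fileA.link"
cat "$tmp/fileA.link"               # the data is still reachable via the remaining link
```

Only when the last link is removed does the counter reach zero and the data blocks become reusable.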

How can I check if my backups are using hard-links?

Please compare the inodes of a file that definitely didn't change between two backups. For this, open two terminals and cd into both backup directories. ls -lai will print a list whose first column is the inode; it should be equal for the same file in both backups if the file didn't change and the backups use hard-links. The third column is a counter (for regular files, not directories) of how many hard-links exist for this inode. It should be >1; so if you took e.g. 3 backups, it should be 3.

Don't be confused by the size of each backup. If you look at a backup's size via the properties dialog in a file manager, it will look like all backups are full backups (not incremental). But that's not (necessarily) the case.

To get the correct size of each backup with respect to hard-links you can run:

du -chd0 /media/<USER>/backintime/<HOST>/<USER>/1/*

Compare with option -l to count hardlinks multiple times:

du -chld0 /media/<USER>/backintime/<HOST>/<USER>/1/*

(ncdu isn't installed by default so I won't recommend using it)

How to use checksum to find corrupt files periodically?

Starting with BIT version 1.0.28 there is a command line option --checksum which does the same as Use checksum to detect changes in Options. It calculates checksums for the files in both the source and the last backup and uses only this checksum to decide whether a file has changed. The normal mode (without checksums) compares modification times and sizes of the files, which is much faster at detecting changed files.

Because checksumming takes ages, you may want to use it only on Sundays, or only on the first Sunday per month. In that case, deactivate the schedule for your profile, then run crontab -e.

For daily backups at 2 AM and --checksum every Sunday add:

# min hour day month dayOfWeek command
0 2 * * 1-6 nice -n 19 ionice -c2 -n7 /usr/bin/backintime --backup-job >/dev/null 2>&1
0 2 * * Sun nice -n 19 ionice -c2 -n7 /usr/bin/backintime --checksum --backup-job >/dev/null 2>&1

For --checksum only at first Sunday per month add:

# min hour day month dayOfWeek command
0 2 * * 1-6 nice -n 19 ionice -c2 -n7 /usr/bin/backintime --backup-job >/dev/null 2>&1
0 2 * * Sun [ "$(date '+\%d')" -gt 7 ] && nice -n 19 ionice -c2 -n7 /usr/bin/backintime --backup-job >/dev/null 2>&1
0 2 * * Sun [ "$(date '+\%d')" -le 7 ] && nice -n 19 ionice -c2 -n7 /usr/bin/backintime --checksum --backup-job >/dev/null 2>&1

Press CTRL + O to save and CTRL + X to exit (if your editor is nano; this may differ depending on your default text editor).

What is the meaning of the leading 11 characters (e.g. "cf...p.....") in my backup logs?

These characters come from rsync and indicate what changed and why. Please see the section --itemize-changes in the manpage of rsync. See also some rephrased explanations on Stack Overflow.

Backup "WITH ERRORS": [E] 'rsync' ended with exit code 23: See 'man rsync' for more details

BiT Version 1.4.0 (2023-09-14) introduced the evaluation of rsync exit codes for better error recognition:

Before this release, rsync exit codes were ignored and only the backup logs were parsed for errors (which does not find every error, e.g. dead symbolic links logged as symlink has no referent).

This "exit code 23" message may occur at the end of backup logs and BiT logs when rsync was not able to transfer some (or even all) files. See this comment in issue 1587 for a list of all known reasons for rsync's exit code 23.

Currently you can ignore this error after checking the full backup log to see which error is hidden behind "exit code 23" (and possibly fixing it, e.g. deleting or updating dead symbolic links).

We plan to implement an improved handling of exit code 23 in the future (presumably by introducing warnings into the backup log).

What happens when I remove a backup?

Each backup is stored in a dated subdirectory of the "full backup path" shown in Settings. It contains a backup directory of all the files as well as a log of the backup's creation and some other details. Removing the backup removes this whole directory. Each backup is independent of the others, so other backups are not affected. However, the data of identical files is not stored redundantly by multiple backups, so removing a backup will only recover the space used by files that are unique to that backup.
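This behaviour can be demonstrated with `cp -al`, which creates a hard-linked copy of a directory tree, much like consecutive backups share unchanged files (a sketch, not BIT's actual code path):

```shell
# Demo: removing one backup directory only frees the space of files unique to it.
tmp=$(mktemp -d)
mkdir "$tmp/backup0"
echo "shared data" > "$tmp/backup0/fileA"    # will be shared by both backups
echo "old only"    > "$tmp/backup0/fileB"    # unique to backup0
cp -al "$tmp/backup0" "$tmp/backup1"         # hard-linked copy of the whole tree
rm "$tmp/backup1/fileB"                      # fileB now exists only in backup0

rm -rf "$tmp/backup0"                        # remove the older backup
cat "$tmp/backup1/fileA"                     # the shared file survives in backup1
```

Only fileB's blocks are freed; fileA's data is still referenced by backup1's hard-link.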

How can I exclude cache folders to improve backup speed and reduce storage?

Why exclude cache folders?

Cache folders typically contain temporary files that are not necessary for backups. Excluding them can significantly improve backup speed and reduce storage usage.

How to exclude cache folders:

  1. Open Back in Time.

  2. Go to the Exclude Patterns settings:

    • Click the "Exclude" tab in the configuration window.
    • Click the Add button to create a new exclude pattern.
  3. Add the following patterns to exclude common cache directories:

    .var/app/**/[Cc]ache/
    .var/app/**/media_cache/
    .mozilla/firefox/**/cache/
    .config/BraveSoftware/Brave-Browser/Default/Service Worker/CacheStorage/
    

Explanation:

  • /**/ matches any directory structure leading to the specified folder.
  • [Cc]ache matches folder names with either uppercase or lowercase "Cache."
  1. Decide whether to include or exclude the folder itself:
    • To exclude only the folder’s content, use /* at the end of the pattern:
      .var/app/**/[Cc]ache/*
      
    • To exclude the folder and its contents, omit the /*:
      .var/app/**/[Cc]ache/
      

Tips for better results:

  • Check Backup Logs: After running a backup, review the logs to identify additional folders that may slow down the process. Example log entries for cache files:

    [E] Skipping file /path/to/cache/file: Too many small files.
    
  • Customize Patterns: Adjust the patterns to suit your specific applications. For example, modify paths for browsers or other software you use.

  • Test Exclude Patterns: Test your backup after adding patterns to ensure they work as intended.

How to use extended filesystem attributes (xattr) to exclude files/directories?

Please see Issue #817 for details.

Are Samba shares supported? / Does Samba support hard links?

There is no short answer to that. It depends on the configuration of the Samba server and the filesystem of the volume/hard disk it uses.

Generally it is not recommended to use Samba shares as backup destination. Use an SSH profile instead.

If you find clear rules for configuring Samba so that it works reliably with Back In Time, please let us know the details. We will then integrate them into the documentation.

How does Back in Time handle open or changed files during backup?

Explanation

Back In Time uses rsync to copy the files and directories that the configuration specifies to be backed up. rsync does not lock files that are open or being modified, so files can be copied in an inconsistent state. rsync reads each file only once as it passes through, so changes made during the backup are only partially captured. This can affect files such as logs, browser caches, databases or virtual machine images, where inconsistencies can even lead to data corruption.

To reduce this risk, the following approaches can be considered:

  • Filesystem snapshots If you use a filesystem such as Btrfs or ZFS that has a snapshot function, it can be used together with Back in Time. Filesystem snapshots provide a read-only copy of the filesystem frozen at a specific point in time, which ensures data integrity even for open/changing files. Configure Back In Time to back up from the filesystem's read-only snapshot.

  • Use exclusions If filesystem snapshots are not available, one solution is to exclude files that are frequently open or actively changing. The command lsof on GNU/Linux lists open files and the processes that opened them. Use this list as a base for configuring the BIT exclusion list.

  • Application-specific handling Applications that open and modify files frequently, like databases or virtual machines, may need specific solutions. Use the database's own backup function to create a consistent copy and include that in the BIT backup. Virtual machine products typically can create snapshots of their state, which can be included in BIT.

  • Choose when to perform the backup Perform backups at times when fewer files are open, for example at night.

Restore

After Restore I have duplicates with extension ".backup.20131121"

This happens because Backup files on restore was enabled in Options. It is the default setting, to prevent overwriting files on restore.

If you don't need them any more you can delete those files. Open a terminal and run:

find /path/to/files -regextype posix-basic -regex ".*\.backup\.[[:digit:]]\{8\}"

Check that this correctly lists all the files you want to delete, then run:

find /path/to/files -regextype posix-basic -regex ".*\.backup\.[[:digit:]]\{8\}" -delete

Back In Time doesn't find my old backups on my new Computer

Back In Time prior to version 1.1.0 had an option called Auto Host/User/Profile ID (hidden under General > Advanced) which always used the current host- and username for the full backup path. When (re-)installing your computer you probably chose a different hostname or username than on your old machine. With Auto Host/User/Profile ID activated, Back In Time now tries to find your backups under the new host- and username underneath the /path/to/backintime/ path.

The Auto Host/User/Profile ID option is gone in version 1.1.0 and above. It was thoroughly confusing and added no real value.

You have three options to fix this:

  • Disable Auto Host/User/Profile ID and change Host and User to match your old machine.

  • Rename the backups path /path/to/backintime/OLDHOSTNAME/OLDUSERNAME/profile_id to match your new host- and username.

  • Upgrade to a more recent version of Back In Time (1.1.0 or above). The Auto Host/User/Profile ID option is gone and it also comes with an assistant to restore the config from an old backup on first start.

Schedule

How does the 'Repeatedly (anacron)' schedule work?

In fact Back In Time doesn't use anacron anymore; it was too inflexible. But this schedule mimics anacron.

BIT will create a crontab entry which starts backintime --backup-job every 15 minutes (or once an hour if the schedule is set to weeks). With the --backup-job command, BIT checks whether the profile is due to run this time, or exits immediately. For this it reads the time of the last successful run from ~/.local/share/backintime/anacron/ID_PROFILENAME. If this is older than the configured interval, it continues and creates a backup.

If the backup finished successfully without errors, BIT writes the current time into ~/.local/share/backintime/anacron/ID_PROFILENAME (even if Repeatedly (anacron) isn't chosen). So, if there was an error, BIT will try again at the next quarter hour.

backintime --backup will always create a new backup, no matter how much time elapsed since the last successful backup.
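The decision can be illustrated with a simplified shell sketch. Note this is an approximation: the real logic lives in BIT's Python code and parses the timestamp file's contents, while this sketch stands in with the file's modification time.

```shell
# Simplified sketch of the anacron-like check. The stamp file stands in for
# ~/.local/share/backintime/anacron/ID_PROFILENAME.
stamp=$(mktemp)
touch -d '2 days ago' "$stamp"     # pretend the last successful backup ran 2 days ago
interval_min=$((24 * 60))          # profile scheduled to run once a day

if [ -n "$(find "$stamp" -mmin +"$interval_min")" ]; then
    echo "interval elapsed: create a backup"
else
    echo "too early: exit immediately"
fi
```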

Will a scheduled backup run as soon as the computer is back on?

It depends on which schedule you choose:

  • the schedule Repeatedly (anacron) uses anacron-like code. So when your computer is back on, it will start the job if the configured interval has elapsed since the last backup.

  • with When drive get connected (udev), Back In Time will start a backup as soon as you connect your drive ;-)

  • old-fashioned schedules like Every Day use cron. Cron only starts a new backup at the given time; if your computer is off at that time, no backup will be created.

If I edit my crontab and add additional entries, will that be a problem for BIT as long as I don't touch its entries? What does it look for in the crontab to find its own entries?

You can add your own crontab entries as you like; Back In Time will not touch them. It identifies its own entries by the comment line #Back In Time system entry, this will be edited by the gui: and the command that follows it. You should not remove or change that line. If no automatic schedules are defined, Back In Time adds an extra comment line #Please don't delete these two lines, or all custom backintime entries are going to be deleted next time you call the gui options! which prevents Back In Time from removing user-defined schedules.

Can I use a systemd timer instead of cron?

While there is no support within Back In Time to directly create a systemd timer, users can create user timer and service units. Templates are provided below. Optionally adjust the value of OnCalendar= to a valid setting; see man systemd.timer for more. After creating both files, activate the timer with systemctl --user daemon-reload followed by systemctl --user enable --now backintime-backup-job.timer.

Timer:

# ~/.config/systemd/user/backintime-backup-job.timer
[Unit]
Description=Start a backintime backup once daily

[Timer]
OnCalendar=daily
AccuracySec=1m
Persistent=true

[Install]
WantedBy=timers.target

Service:

# ~/.config/systemd/user/backintime-backup-job.service
[Unit]
Description=Run backintime backup generation

[Service]
Type=oneshot
ExecStart=/usr/bin/nice -n19 /usr/bin/ionice -c2 -n7 /usr/bin/backintime backup-job

Problems, Errors & Solutions

OverflowError: Value 1702441408 out of range for UInt32

The Back In Time GUI crashes and this exception appears in its terminal output. Known to happen on restoring (#2084) and removing (#2192) of backups. Assuming it might happen also on creating backups.

The current hypothesis is that the problem was introduced, or happens more often, since the migration from PyQt version 5 to version 6 (BIT version 1.5.0).

The fix (PR #2099) was released with version 1.6.0. For users prior to this version, there is a tiny workaround described in that issue comment.

SettingsDialog object has no attribute cbCopyUnsafeLinks

When adding a file or directory that is in fact a symlink to the Include tab in the Manage profiles dialog, the BIT GUI crashes and prints the following error in the terminal.

Traceback (most recent call last):
  File "/usr/share/backintime/qt/manageprofiles/tab_include.py", line 185, in btn_include_add_clicked
    self._parent_dialog.cbCopyUnsafeLinks.isChecked() or
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'SettingsDialog' object has no attribute 'cbCopyUnsafeLinks'

Introduced in version 1.5.3. Fixed in 1.6.0. See issue #2279.

Workaround: Don't use the symlink; use its target instead.

WARNING: A backup is already running

Back In Time uses signal files like worker<PID>.lock to avoid starting the same backup twice. Normally the file is deleted as soon as the backup finishes. In some cases something goes wrong and Back In Time is forcefully stopped without the chance to delete this signal file.

Since Back In Time only starts a new backup job (for the same profile) if the signal file does not exist, such a leftover file needs to be deleted first. But before doing this manually, make sure that Back In Time really is not running anymore:

ps aux | grep -i backintime

If the output shows a running instance of Back In Time, wait until it finishes or kill it via kill <process id>.

For more details see the developer documentation: Usage of control files (locks, flocks, logs and others)

Back in Time does not start and shows: The application is already running! (pid: 1234567)

This message occurs when Back In Time is either already running or did not finish regularly (e.g. due to a crash) and wasn't able to delete its application lock file.

Before deleting that file manually, make sure no backintime process is running via ps aux | grep -i backintime; if one is still running, wait for it to finish or kill it. After that, look in the folder ~/.local/share/backintime for the file app.lock.pid and delete it.
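The check and cleanup can be combined in one guarded command (a sketch using the lock file path described above; the `[b]` bracket keeps grep from matching itself):

```shell
# Remove the stale application lock file only if no backintime process is running.
if ps aux | grep -qi '[b]ackintime'; then
    echo "Back In Time is still running - do not delete the lock file"
else
    rm -f ~/.local/share/backintime/app.lock.pid
fi
```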

For more details see the developer documentation: Usage of control files (locks, flocks, logs and others)

Switching to dark or light mode in the desktop environment is ignored by BIT

After restarting Back In Time, it should adapt to the desktop's currently used color theme.

This happens because Qt does not detect theme modifications out of the box. Workarounds are known, but they require a relatively large amount of code and, in our opinion, are not worth the effort.

Version >= 1.2.0 works very slow / Unchanged files are backed up

After updating to >= 1.2.0, BiT does a (nearly) full backup because file permissions are handled differently. Before 1.2.0 all destination file permissions were set to -rw-r--r--. In 1.2.0 rsync is executed with --perms option which tells rsync to preserve the source file permission. That's why so many files seem to be changed.

If you don't like the new behavior, you can use "Expert Options" -> "Paste additional options to rsync" to add the value --no-perms --no-group --no-owner in that field.

What happens if I hibernate the computer while a backup is running?

Back In Time will inhibit automatic suspend/hibernate while a backup/restore is running. If you manually force hibernate this will freeze the current process. It will continue as soon as you wake up the system again.

What happens if I power down the computer while a backup is running, or if a power outage happens?

This will kill the current process. The unfinished backup will stay in the new_snapshot folder. Depending on the state the process was in when killed, the next scheduled backup will either continue the leftover new_snapshot or remove it first and start a new one.

What happens if there is not enough disk space for the current backup?

Back In Time will try to create a new backup, but rsync will fail when there is not enough space. Depending on the Continue on errors setting, the failed backup is either kept and marked With Errors or removed. By default, Back In Time will then remove the oldest backups until there is more than 1 GiB of free space again.

NTFS Compatibility

Although devices formatted with the NTFS file system can generally be used with Back In Time, there are some limitations to be aware of.

The NTFS file system does not support the following characters in file or directory names:

< (less than)
> (greater than)
: (colon)
" (double quote)
/ (forward slash)
\ (backslash)
| (vertical bar or pipe)
? (question mark)
* (asterisk)

If Back In Time tries to copy files whose names contain these characters, an "Invalid argument (22)" error message will be displayed.

It is recommended that only devices formatted with Unix style file systems (such as ext4) be used.

For more information, refer to this Microsoft page.
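A quick way to scan a source tree for such names before backing up to NTFS (a hypothetical helper command, not a BIT feature; the demo uses a temporary tree with one offending name):

```shell
# Find file names that NTFS cannot store. The bracket expression lists the
# reserved characters; backslash is literal inside a grep bracket.
src=$(mktemp -d)
touch "$src/plain.txt" "$src/report:2024.txt"   # the colon is invalid on NTFS
find "$src" | grep '[<>:"\|?*]'                 # prints only the invalid name
```

Run the same `find | grep` over your real include folders to spot problem files in advance.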

GUI does not scale on high resolution or 4k monitors

The technical details are complex and many components of the operating system are involved. BIT itself is not involved and also not responsible for it. Several approaches might help:

  • Check your desktop environment or window manager for settings regarding scaling.
  • Because BIT uses Qt for its GUI, modifying the environment variable QT_SCALE_FACTOR or QT_AUTO_SCREEN_SCALE_FACTOR might help. See this article and Issue #1946 for more details.

Tray icon or other icons not shown correctly

Status: Fixed in v1.4.0

Missing installations of Qt-supported themes and icons can cause this effect. Back In Time may activate the wrong theme in this case, leading to some missing icons.

As a clean solution, please check your Linux settings (Appearance, Styles, Icons) and install all theme and icon packages for your preferred style via your package manager.

See issues #1306 and #1364.

Non-working password safe and BiT forgets passwords (keyring backend issues)

Status: Fixed in v1.3.3 (mostly) and v1.4.0

Back in Time only supports selected "known-good" backends for setting and querying passwords from a user-session password safe, using the keyring library.

Enabling a supported keyring requires manually editing a configuration file, until there is e.g. a settings GUI for this.

Symptoms: with the command line argument --debug, keyring problems can be recognized by DEBUG log output like:

DEBUG: [common/tools.py:829 keyringSupported] No appropriate keyring found. 'keyring.backends...' can't be used with BackInTime
DEBUG: [common/tools.py:829 keyringSupported] No appropriate keyring found. 'keyring.backends.chainer' can't be used with BackInTime

To diagnose and solve this follow these steps in a terminal:

# Show default backend
python3 -c "import keyring.util.platform_; print(keyring.get_keyring().__module__)"

# List available backends:
keyring --list-backends

# Find out the config file folder:
python3 -c "import keyring.util.platform_; print(keyring.util.platform_.config_root())"

# Create a config file named "keyringrc.cfg" in this folder with one of the available backends (listed above)
[backend]
default-keyring=keyring.backends.kwallet.DBusKeyring

See also issue #1321

Outdated

Segmentation fault on Exit

This problem existed at least since version 1.2.1, and should hopefully be fixed with version 1.5.0. For all affected versions, it does not impact the functionality of Back In Time or jeopardize backup integrity. It can be safely ignored. But please report the error when encountered in version 1.5.0 or newer.


Incompatibility with rsync 3.2.4 or newer

Status: Fixed in v1.3.3

Release 1.3.2 and earlier versions of Back In Time are incompatible with rsync >= 3.2.4 (#1247).

If you use rsync >= 3.2.4 and backintime <= 1.3.2 there is a workaround: add --old-args in Expert Options / Additional options to rsync. Note that some GNU/Linux distributions (e.g. Manjaro) use a workaround with the environment variable RSYNC_OLD_ARGS in their distro-specific packages for Back In Time; in that case you may not see any problems.

Hardware-specific Setup

How to use BIT with an Ugreen NAS?

Please see this blogpost by George Ruinelli @caco3.

How to use QNAP QTS NAS with BIT over SSH

To use BackInTime over SSH with a QNAP NAS there is still some work to be done in the terminal.

WARNING: DON'T use the changes for sh suggested in man backintime. This will damage the QNAP admin account (and more). Changing sh for another user doesn't make sense either, because SSH only works with the QNAP admin account!

Please test this Tutorial and give some feedback!

  1. Activate the SSH prefix: PATH=/opt/bin:/opt/sbin:\$PATH in Expert Options

  2. Use admin (the default QNAP admin) as remote user. Only this user can connect through SSH. Also activate SFTP on the QNAP's SSH settings page.

  3. Path should be something like /share/Public/

  4. Create the public/private key pair for the password-less login with the user you use for BackInTime and copy the public key to the NAS.

    ssh-keygen -t rsa
    ssh-copy-id -i ~/.ssh/id_rsa.pub  <REMOTE_USER>@<HOST>

To fix the message about the unsupported find PATH -type f -exec, you need to install Entware-ng. QNAP's QTS is based on Linux, but some of its packages have limited functionality, including some that BackInTime requires.
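
A quick way to check what you are dealing with is to run the failing construct yourself. Shown locally for illustration; on the NAS you would wrap it in ssh <REMOTE_USER>@<HOST> '...':

```shell
# BusyBox builds of 'find' often lack -exec; a full findutils build prints 'ok'.
find . -maxdepth 0 -type d -exec echo ok \;
```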

Please follow this install instruction to install Entware-ng on your QNAP NAS.

Because there is no web interface yet for Entware-ng, you must configure it by SSH on the NAS.

Some packages, for example findutils, are installed by default.

Log in to the NAS and update Entware-ng's package database and installed packages with

ssh <REMOTE_USER>@<HOST>
opkg update
opkg upgrade

Finally, install the current packages of bash, coreutils and rsync:

opkg install bash coreutils rsync

Now the error message should be gone and you should be able to take a first backup with BackInTime.

BackInTime changes the permissions on the backup path: the owner of the backup has read permission; other users have no access.

This behavior may change with newer versions of BackInTime or QNAP's QTS!

How to use Synology DSM 5 with BIT over SSH

Issue

BackInTime cannot use Synology DSM 5 directly because the SSH connection to the NAS refers to a different root file system than SFTP does. With SSH you access the real root; with SFTP you access a fake root (/volume1).

Solution

Mount /volume1/backups to /volume1/volume1/backups

Suggestion

DSM 5 is no longer up to date and might be a security risk. It is strongly advised to upgrade to DSM 6! The setup with DSM 6 is also much easier!

  1. Make a new volume named volume1 (should already exist, else create it)

  2. Enable User Home Service (Control Panel / User)

  3. Make a new share named backups on volume1

  4. Make a new share named volume1 on volume1 (It must be the same name)

  5. Make a new user named backup

  6. Give the user backup Read/Write permission on the shares backups and volume1, and also permission for FTP

  7. Enable SSH (Control Panel / Terminal & SNMP / Terminal)

  8. Enable SFTP (Control Panel / File Service / FTP / SFTP)

  9. Enable rsync service (Control Panel / File Service / rsync)

  10. Since DSM 5.1: Enable Backup Service (Backup & Replication / Backup Service) (This seems not to be available/required anymore with DSM 6!)

  11. Log on as root by SSH

  12. Modify the shell of user backup: set it to /bin/sh. (Run vi /etc/passwd, navigate to the line that begins with backup, press I to enter insert mode, replace /sbin/nologin with /bin/sh, then save and exit by pressing Esc and typing :wq followed by Enter.) This step might have to be repeated after a major update of the Synology DSM! Note: This is quite a dirty hack! It is suggested to upgrade to DSM 6, which doesn't need this any more!

  13. Make a new directory /volume1/volume1/backups

    mkdir /volume1/volume1/backups
  14. Mount /volume1/backups on /volume1/volume1/backups

    mount -o bind /volume1/backups /volume1/volume1/backups
  15. To auto-mount it make a script /usr/syno/etc/rc.d/S99zzMountBind.sh

    #!/bin/sh

    start()
    {
        /bin/mount -o bind /volume1/backups /volume1/volume1/backups
    }

    stop()
    {
        /bin/umount /volume1/volume1/backups
    }

    case "$1" in
        start) start ;;
        stop) stop ;;
        *) ;;
    esac

    Note: If the folder /usr/syno/etc/rc.d doesn't exist, check whether /usr/local/etc/rc.d/ exists. If so, put the script there. (After the update to Synology DSM 6.0beta, the first one did not exist anymore.) Make sure the execution flag of the file is set, else it will not be run at startup! To make it executable, run: chmod +x /usr/local/etc/rc.d/S99zzMountBind.sh

  16. On the workstation on which you want to use BIT, create SSH keys for user backup and send the public key to the NAS

    ssh-keygen -t rsa -f ~/.ssh/backup_id_rsa
    ssh-add ~/.ssh/backup_id_rsa
    ssh-copy-id -i ~/.ssh/backup_id_rsa.pub backup@<synology-ip>
    ssh backup@<synology-ip>
  17. You might get the following error:

    /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
  18. If so, copy the public key manually to the NAS as root with

    scp ~/.ssh/id_rsa.pub backup@<synology-ip>:/var/services/homes/backup/
    ssh backup@<synology-ip> cat /var/services/homes/backup/id_rsa.pub >> /var/services/homes/backup/.ssh/authorized_keys
    # you'll still be asked for your password for both of these commands
    # after this you should be able to login password-less
  19. And proceed with the next step

  20. If you are still prompted for your password when running ssh backup@<synology-ip>, check the permissions of the file /var/services/homes/backup/.ssh/authorized_keys. It should be -rw-------. If this is not the case, run the command

    ssh backup@<synology-ip> chmod 600 /var/services/homes/backup/.ssh/authorized_keys
  21. Now you can use BackInTime to perform your backup to your NAS with the user backup.

How to use Synology DSM 6 with BIT over SSH

  1. Enable User Home Service (Control Panel / User / Advanced). There is no need to create a volume since everything is stored in the home directory.

  2. Make a new user named backup (or use your existing account). Add this user to the user group Administrators. Without this, you will not be able to log in!

  3. Enable SSH (Control Panel / Terminal & SNMP / Terminal)

  4. Enable SFTP (Control Panel / File Service / FTP / SFTP)

  5. Since DSM 5.1: Enable Backup Service (Backup & Replication / Backup Service) (This seems not to be available/required anymore with DSM 6!) (Tests needed!)

  6. On DSM 6 you can edit the user-root-dir for sFTP: Control Panel -> File Services -> FTP -> General -> Advanced Settings -> Security Settings -> Change user root directories -> Select User. Now select the user backup and Change root directory to User home

  7. On the workstation on which you want to use BIT, create SSH keys for user backup and send the public key to the NAS

     ssh-keygen -t rsa -f ~/.ssh/backup_id_rsa
     ssh-add ~/.ssh/backup_id_rsa
     ssh-copy-id -i ~/.ssh/backup_id_rsa.pub backup@<synology-ip>
     ssh backup@<synology-ip>
  8. You might get the following error:

     /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
     /usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
  9. If so, copy the public key manually to the NAS as root with

     scp ~/.ssh/id_rsa.pub backup@<synology-ip>:/var/services/homes/backup/
     ssh backup@<synology-ip> cat /var/services/homes/backup/id_rsa.pub >> /var/services/homes/backup/.ssh/authorized_keys
     # you'll still be asked for your password for both of these commands
     # after this you should be able to login password-less
  10. And proceed with the next step

  11. If you are still prompted for your password when running ssh backup@<synology-ip>, check the permissions of the file /var/services/homes/backup/.ssh/authorized_keys. It should be -rw-------. If this is not the case, run the command

    ssh backup@<synology-ip> chmod 600 /var/services/homes/backup/.ssh/authorized_keys
  12. In BackInTime settings dialog leave the Path field empty

  13. Now you can use BackInTime to perform your backup to your NAS with the user backup.

Using a non-standard port

If you want to use the Synology NAS with a non-standard SSH/SFTP port (the standard is 22), you have to change the port in a total of 3 places:

  1. Control Panel > Terminal: Port = <PORT_NUMBER>

  2. Control Panel > FTP > SFTP: Port = <PORT_NUMBER>

  3. Backup & Replication > Backup Services > Network Backup Destination: SSH encryption port = <PORT_NUMBER>

Only if all 3 of them are set to the same port is BackInTime able to establish the connection. As a test, one can run the command

rsync -rtDHh --checksum --links --no-p --no-g --no-o --info=progress2 --no-i-r --rsh="ssh -p <PORT_NUMBER> -o IdentityFile=/home/<USER>/.ssh/id_rsa" --dry-run --chmod=Du+wx /tmp/<AN_EXISTING_FOLDER> "<USER_ON_DISKSTATION>@<SERVER_IP>:/volume1/Backups/BackinTime"

in a terminal (on the client PC).

How to use Synology DSM 7 with BIT over SSH

  1. Enable User Home Service (Control Panel > User & Group > Advanced).

  2. Make a new user named backup (or use your existing account) and add this user to the user group Administrators.

  3. Enable SSH (Control Panel > Terminal & SNMP > Terminal)

  4. Enable SFTP (Control Panel > File Services > FTP > SFTP)

  5. Enable rsync (Control Panel > File Services > rsync)

  6. Edit the user-root-directory for SFTP: Control Panel > File Services > FTP > General > Advanced Settings > Security Settings > Change user root directories > Select User > select the user backup > Edit and Change root directory to User home

  7. Make sure the 'homes' shared folder has the default permissions and that non-admin users and groups are not assigned Read or Write permissions on the 'homes' folder. The default permissions are described in this guide

  8. On the workstation on which you need to use BIT, make an SSH key pair for user backup, and send the public key to the NAS:

     ssh-keygen -t rsa -f ~/.ssh/backup_id_rsa
     ssh-copy-id -i ~/.ssh/backup_id_rsa.pub backup@<synology-ip>
     ssh backup@<synology-ip>
  9. Although not strictly necessary, Synology recommends setting the permissions of the .ssh directory and the authorized_keys file to 700 and 600, respectively:

    backup@NAS:~$ chmod 700 .ssh
    backup@NAS:~$ chmod 600 .ssh/authorized_keys
  10. In BackInTime settings dialog leave the Path field empty

  11. Now you can use BackInTime to perform your backup to your NAS with the user backup.

Using a non-standard SSH port with a Synology NAS

If you want to use the Synology NAS with a non-standard SSH/SFTP port as advised by the Security Advisor package, you have to change the Port in 3 places (the default port number for all three is 22):

  1. Control Panel > Terminal & SNMP > Terminal: Port = <PORT_NUMBER>

  2. Control Panel > File Services > FTP > SFTP: Port number = <PORT_NUMBER>

  3. Control Panel > File Services > rsync > SSH encryption port = <PORT_NUMBER>

Only if all 3 are set to the same port is BackInTime able to establish the connection (don't forget to set the new port number in the BIT profiles).

To sign in with ssh using the new port number:

ssh -p <PORT_NUMBER> backup@<synology-ip>

or, for convenience you can edit or create ~/.ssh/config with the following:

Host <synology-ip>
    Port <PORT_NUMBER>

and then use just:

ssh backup@<synology-ip>

"sshfs: No such file or directory" using BIT, but manually ssh with rsync works

The reason (known for DSM version 7) is that the setup of ssh and sftp is customized by Synology.

Solution (Screenshot in Issue #1674):

  1. Go to: Control Panel > File Services > Advanced Settings > Change user root directories > Select User
  2. Add the name of the user used for SSH on the Synology in that list.
  3. At Change root directory to: select User home.

See also

Synology: use different volume for backup

This was tested and related to Synology DSM version 7, but might work with other versions, too. Feel free to report back.

If you want to use a different volume as the destination for the backup use these additional steps:

  1. Follow all steps of the Howto above (e.g. create the additional user, named backup in this example)

  2. In the Synology DSM GUI, create a new shared folder in Control Panel and name it, for example, backup

    Synology DSM7 Basic Setup

  3. Optional: in step 2, enable shared folder encryption depending on your needs (don't lose your encryption key). Advantage: the backup folder (volume) is encrypted, even in case of theft of your Synology NAS. Disadvantage: on each reboot you need to mount the folder manually

    Synology DSM7 Additional Security Measure

  4. As user root or with sudo edit the file: /etc/passwd (Be careful, if you break it, you could break your NAS)

    • vi /etc/passwd
    • Edit the line for your user backup so that the home directory points to the newly created folder: backup:x:1038:100:Back in Time User:/volume1/backup:/bin/sh
  5. Continue with your normal setup of BIT
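
The /etc/passwd change from step 4 can also be done non-interactively. This is only a sketch, demonstrated on a sample file with made-up IDs and the assumed original home directory; on the NAS you would back up /etc/passwd first and then run sed against the real file:

```shell
# Rewrite the home-directory field of user 'backup' in a passwd-style file.
printf 'backup:x:1038:100:Back in Time User:/var/services/homes/backup:/bin/sh\n' > passwd.sample
sed -i 's#:/var/services/homes/backup:#:/volume1/backup:#' passwd.sample
cat passwd.sample   # home dir is now /volume1/backup
```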

How to use Western Digital MyBook World Edition with BIT over ssh?

Device: WesternDigital MyBook World Edition (white light) version 01.02.14 (WD MBWE)

The BusyBox used by WD in the MBWE to provide basic commands like cp (copy) doesn't support hardlinks, which are fundamental to BackInTime's way of creating incremental backups. As a workaround you can install Optware on the MBWE.
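
To see why hardlinks matter, here is a small sketch of the snapshot technique BIT relies on (a hardlinked copy as produced by GNU cp -al or rsync --link-dest; the directory names are made up):

```shell
# Two snapshots share one inode per unchanged file: this is what makes
# incremental backups cheap, and what BusyBox's cp cannot provide.
mkdir -p snap1
echo data > snap1/file
cp -al snap1 snap2                       # GNU cp: hardlink instead of copy
[ snap1/file -ef snap2/file ] && echo "same inode"
```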

Before proceeding, please make a backup of your MBWE. There is a significant risk of breaking your device and losing all your data. There is good documentation about Optware at http://mybookworld.wikidot.com/optware.

  1. You have to login to MBWE's web admin and change to Advanced Mode. Under System | Advanced you have to enable SSH Access. Now you can log in as root over ssh and install Optware (assuming <MBWE> is the address of your MyBook).

    Type in terminal:

     ssh root@<MBWE> #enter 'welc0me' for password (you should change this by typing 'passwd')
     wget http://mybookworld.wikidot.com/local--files/optware/setup-whitelight.sh
     sh setup-whitelight.sh
     echo 'export PATH=$PATH:/opt/bin:/opt/sbin' >> /root/.bashrc
     echo 'export PATH=/opt/bin:/opt/sbin:$PATH' >> /etc/profile
     echo 'PermitUserEnvironment yes' >> /etc/sshd_config
     /etc/init.d/S50sshd restart
     /opt/bin/ipkg install bash coreutils rsync nano
     exit
  2. Back in MBWE's web admin go to Users and add a new user (<REMOTE_USER> in this How-to) with Create User Private Share set to Yes.

    In terminal:

     ssh root@<MBWE>
     chown <REMOTE_USER> /shares/<REMOTE_USER>
     chmod 700 /shares/<REMOTE_USER>
     /opt/bin/nano /etc/passwd
     #change the line
     #<REMOTE_USER>:x:503:1000:Linux User,,,:/shares:/bin/sh
     #to
     #<REMOTE_USER>:x:503:1000:Linux User,,,:/shares/<REMOTE_USER>:/opt/bin/bash
     #save and exit by pressing CTRL+O and CTRL+X
     exit
  3. Next create the ssh-key for your local user. In the terminal

     ssh <REMOTE_USER>@<MBWE>
     mkdir .ssh
     chmod 700 .ssh
     echo 'PATH=/opt/bin:/opt/sbin:/usr/bin:/bin:/usr/sbin:/sbin' >> .ssh/environment
     exit
     ssh-keygen -t rsa #press Enter to accept the default path
     ssh-add ~/.ssh/id_rsa
     scp ~/.ssh/id_rsa.pub <REMOTE_USER>@<MBWE>:./ #enter password from above
     ssh <REMOTE_USER>@<MBWE> #you will still have to enter your password
     cat id_rsa.pub >> .ssh/authorized_keys
     rm id_rsa.pub
     chmod 600 .ssh/*
     exit
     ssh <REMOTE_USER>@<MBWE> #this time you shouldn't be asked for a password anymore
     exit
  4. You can test whether everything is done by entering

    ssh <REMOTE_USER>@<MBWE> cp --help

    The output should look like:

     Usage: cp [OPTION]... [-T] SOURCE DEST
     or: cp [OPTION]... SOURCE... DIRECTORY
     or: cp [OPTION]... -t DIRECTORY SOURCE...
     Copy SOURCE to DEST, or multiple SOURCE(s) to DIRECTORY.
    
     Mandatory arguments to long options are mandatory for short options too.
     -a, --archive same as -dR --preserve=all
         --backup[=CONTROL] make a backup of each existing destination file
     -b like --backup but does not accept an argument
         --copy-contents copy contents of special files when recursive
     ... (lot more lines with options)

    But if your output looks like the one below, you are still using BusyBox and will not be able to run backups with BackInTime over ssh:

     BusyBox v1.1.1 (2009.12.24-08:39+0000) multi-call binary
    
     Usage: cp [OPTION]... SOURCE DEST

Project & Contributing & more

Why do I need to introduce myself?

It helps maintainers understand who you are and how to communicate with you. This saves unnecessary effort on both sides by avoiding misunderstandings that could lead to rejected contributions. It also helps distinguish genuine contributors from accounts submitting low-quality or AI-generated changes merely to inflate commit statistics or stars, without real engagement in the project.

Here is a small suggestion and guidance for your introduction:

  • How long and in what way have you been using BIT?
  • What experience and skills do you have in software development?
  • What are your current learning goals?
  • How did you become aware of this issue?

Can I contribute without using the software?

No, in most cases. Contributors must be users of Back In Time. Real contributions require familiarity with the software, its behavior, and its workflows; they come from real usage.

Can you assign this to me?

No. Don't ask. Comment with intent or a plan first; otherwise it's just noise. Such behavior disrespects contributors with real intent and burdens maintainers who work on this project in their free time. Don't waste our time.

Can I use @ mentions freely in issues or PRs?

No. Never. Avoid them in all cases. Mentions trigger notifications and create noise. Maintainers and subscribed contributors already see all activity.

Can I boost my commit count?

No. Doing that can get your account blocked or deleted, because maintainers will report you to Microsoft's abuse team. This project isn't for collecting stars or commits. Maybe watching Don't Contribute to Open Source will help you to understand and learn.

Can I submit AI-generated contributions?

No. AI-generated contributions are prohibited. Attempting this will be reported to Microsoft's abuse team, and your account may be blocked or deleted.

Alternative installation options

Besides the repositories of the official GNU/Linux distributions, there are alternative installation options provided and maintained by third parties. Use them at your own risk and please contact the third-party maintainers if you encounter problems. Again: we strongly recommend not using third-party repositories because of possible security issues.

Support for specific package formats (deb, rpm, Flatpak, AppImage, Snaps, PPA, …)

We assist and support other projects that provide specific distribution packages. If you want to provide such a package, we suggest creating your own repository to manage and maintain it. It will be mentioned in our documentation as an alternative source for installation.

We do not directly support third-party distribution channels associated with specific GNU/Linux distributions, unofficial repositories (e.g. Arch AUR, Launchpad PPA) or Flatpak & Co. One reason is our lack of resources and the need to prioritize tasks. Another reason is that there are distro maintainers with much more experience and skill in packaging. We always recommend using the official repositories of GNU/Linux distributions and contacting their maintainers if Back In Time is unavailable or outdated.

Is BIT really not supported by Canonical Ubuntu?

Ubuntu consists of several repositories, each offering different levels of support. The main repository is maintained by Canonical and receives regular security updates and bug fixes throughout the 5-year support period of LTS releases.

In contrast, the universe repository is community-managed, meaning security updates and bug fixes are not guaranteed and depend heavily on community activity and volunteers. Therefore, packages in universe may lag behind the corresponding, well-maintained packages in Debian GNU/Linux and might miss important fixes.

Back In Time is one such package in the universe repository. That package is copied from the Debian GNU/Linux repository. In other words, Back In Time is not maintained by Canonical, but by volunteers from the Ubuntu community.

Move project to alternative code hoster (e.g. Codeberg, GitLab, …)

We also believe that staying with Microsoft GitHub is not a good idea. Microsoft GitHub does not offer any exclusive feature for our project that another hoster could not also provide. A migration, however, is a matter of time and resources we currently do not have, but it is on our list. Based on the current state of discussion, we seem to target Codeberg.org.

For more details please see this thread on the mailing list.

How to review a Pull Request

Reviewing a Pull Request (PR) isn’t just about the code—it’s also about functionality. Changes can be tested by installing Back In Time and trying them out, even without reading the code. This allows issues to be identified from a user’s perspective. A second pair of eyes helps catch errors, spot overlooked issues, and improve overall quality. Fresh perspectives, knowledge sharing, and better maintainability contribute to the long-term stability of the project.

Check PRs labeled with PR: Waiting for review. Checking the milestone assigned to a PR can also help gauge its priority and urgency.

  • Start by carefully reading the PR description to understand the proposed changes. Ask back if something is not clear.
  • When giving feedback, consider the contributor’s level of experience and skills. Keep it polite and constructive—every beginner could be a future maintainer.

To test functionality, check out the PR code locally on a virtual machine or your local machine. Running Back In Time in a test environment provides insights that can be shared as findings, observations, or suggestions for improvement.
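
Checking out a PR locally can be sketched as follows. GitHub exposes every pull request under refs/pull/<N>/head; a throwaway local repository stands in for GitHub here so the snippet is self-contained, and the PR number 1 is made up:

```shell
# Simulate a remote that carries a PR ref, then fetch and check it out,
# just as you would with 'git fetch origin pull/<N>/head:pr-<N>' on GitHub.
git init -q origin-repo
git -C origin-repo -c user.email=a@example.com -c user.name=tester \
    commit -q --allow-empty -m "base"
git -C origin-repo update-ref refs/pull/1/head HEAD
git clone -q origin-repo work
git -C work fetch -q origin pull/1/head:pr-1
git -C work checkout -q pr-1
git -C work rev-parse --abbrev-ref HEAD   # prints: pr-1
```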

About code review:

  • Code should follow project standards and be structured for long-term maintainability.
  • If a PR is too large or complex, suggest breaking it down into smaller parts.
  • How is the documentation?
  • Are there unit tests?
  • Does the changelog need an entry?

Testing & Building

SSH related tests are skipped

They get skipped if no SSH server is available. Please see the section Testing & Building about how to set up an SSH server on your system.

Setup SSH Server to run unit tests

Please see section Testing - SSH.