YUM4/DNF compatibility via yum action plugin (#44322)

* YUM4/DNF compatibility via yum action plugin

DNF does not natively support allow_downgrade as an option; instead,
downgrades are always allowed by default (this is not configurable by the
administrator), so the behavior had to be implemented in the module
(a minimal sketch of the guard follows).
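
A minimal sketch of that guard (illustrative only; it mirrors the _compare_evr and _is_newer_version_installed helpers added in this commit and assumes the rpm Python bindings are available):

import rpm

def is_downgrade(installed_evr, candidate_evr):
    # labelCompare returns 1 when the first EVR is newer, 0 if equal, -1 if older
    return rpm.labelCompare(installed_evr, candidate_evr) == 1

allow_downgrade = False                 # stand-in for the module option
installed = ('0', '1.0', '2')           # epoch, version, release currently installed
candidate = ('0', '1.0', '1')           # EVR parsed from the requested package spec
if is_downgrade(installed, candidate) and not allow_downgrade:
    print("skipping: requested EVR is older than the installed package")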

 - Fixed group actions in check mode to report correct changed state
 - Better error handling for depsolve and transaction errors in DNF
 - Fixed group action idempotent transactions
 - Add use_backend to yum module/action plugin
 - Fix dnf handling of autoremove (it previously did not work and had
   no default value specified; it now works and matches the default
   behavior of yum)
 - Enable installroot tests for yum4(dnf) integration testing, dnf
   backend now supports that
 - Switch from zip to bc for certain package install/remove test
   cases in the yum integration tests. The dnf depsolver downgrades
   python when zip is uninstalled, which alters the test environment
   in a way we cannot control.
 - Add changelog fragment
 - Return a pkg_mgr fact if it was not previously set.
Adam Miller 2018-08-27 12:17:47 -05:00 committed by Toshio Kuratomi
parent 9ff20521d1
commit 397febd343
12 changed files with 826 additions and 247 deletions


@ -0,0 +1,22 @@
---
major_changes:
- yum and dnf modules now at feature parity
- new yum action plugin enables the yum module to work with both yum3
and dnf-based yum4 by detecting the backend package manager and routing
commands through the correct Ansible module for that python API
- New yumdnf module defines the shared argument specification for both
yum and dnf modules and provides an entry point to share code when
applicable
minor_changes:
- Fixed group actions in check mode to report correct changed state
- Better error handling for depsolve and transaction errors in DNF
- Fixed group action idempotent transactions in dnf backend
- Add use_backend to yum module/action plugin
- Fix dnf handling of autoremove to be compatible with yum
- Enable installroot tests for yum4(dnf) integration testing, dnf
backend now supports that
- Switch from zip to bc for certain package install/remove test
cases in the yum integration tests. The dnf depsolver downgrades
python when zip is uninstalled, which alters the test environment
in a way we cannot control.
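
The backend routing described above reduces to a small mapping from the requested or detected package manager to one of the two backend modules. A minimal sketch (illustrative names only; the real action plugin appears later in this diff and also consults the ansible_pkg_mgr fact and delegation):

def select_backend(requested, pkg_mgr_fact):
    """Pick the Ansible module that should service a yum task."""
    # 'auto' defers to the ansible_pkg_mgr fact gathered from the target host
    backend = pkg_mgr_fact if requested == 'auto' else requested
    # yum4 is the dnf-based rewrite, so it is serviced by the dnf module
    if backend == 'yum4':
        backend = 'dnf'
    if backend not in ('yum', 'dnf'):
        raise ValueError('cannot determine a yum backend for %r' % backend)
    return backend

assert select_backend('auto', 'dnf') == 'dnf'   # yum task on a dnf host -> dnf module
assert select_backend('yum4', 'yum') == 'dnf'   # explicit yum4 request -> dnf module
assert select_backend('yum', 'dnf') == 'yum'    # explicit yum3 request -> yum module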


@ -159,8 +159,14 @@ options:
version_added: "2.7"
allow_downgrade:
description:
- This is effectively a no-op in DNF as it is the default behavior of dnf, but is an accepted parameter for feature
parity/compatibility with the I(yum) module.
- Specify whether the named package and version is allowed to downgrade
a possibly already-installed higher version of that package.
Note that setting allow_downgrade=True can make this module
behave in a non-idempotent way. The task could end up with a set
of packages that does not match the complete list of specified
packages to install (because dependencies between the downgraded
package and others can cause changes to the packages which were
in the earlier transaction).
type: bool
default: False
version_added: "2.7"
@ -240,6 +246,7 @@ EXAMPLES = '''
'''
import os
import re
import tempfile
try:
@ -255,7 +262,7 @@ except ImportError:
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.urls import fetch_url
from ansible.module_utils.six import PY2
from ansible.module_utils.six import PY2, text_type
from distutils.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule
@ -276,6 +283,115 @@ class DnfModule(YumDnf):
self._ensure_dnf()
def _package_dict(self, package):
"""Return a dictionary of information for the package."""
# NOTE: This no longer contains the 'dnfstate' field because it is
# already known based on the query type.
result = {
'name': package.name,
'arch': package.arch,
'epoch': str(package.epoch),
'release': package.release,
'version': package.version,
'repo': package.repoid}
result['nevra'] = '{epoch}:{name}-{version}-{release}.{arch}'.format(
**result)
# Added for YUM3/YUM4 compat
if package.repoid == 'installed':
result['yumstate'] = 'installed'
else:
result['yumstate'] = 'available'
return result
def _packagename_dict(self, packagename):
"""
Return a dictionary of information for a package name string or None
if the package name doesn't contain at least all NVR elements
"""
if packagename[-4:] == '.rpm':
packagename = packagename[:-4]
# This list was auto generated on a Fedora 28 system with the following one-liner
# printf '[ '; for arch in $(ls /usr/lib/rpm/platform); do printf '"%s", ' ${arch%-linux}; done; printf ']\n'
redhat_rpm_arches = [
"aarch64", "alphaev56", "alphaev5", "alphaev67", "alphaev6", "alpha",
"alphapca56", "amd64", "armv3l", "armv4b", "armv4l", "armv5tejl", "armv5tel",
"armv5tl", "armv6hl", "armv6l", "armv7hl", "armv7hnl", "armv7l", "athlon",
"geode", "i386", "i486", "i586", "i686", "ia32e", "ia64", "m68k", "mips64el",
"mips64", "mips64r6el", "mips64r6", "mipsel", "mips", "mipsr6el", "mipsr6",
"noarch", "pentium3", "pentium4", "ppc32dy4", "ppc64iseries", "ppc64le", "ppc64",
"ppc64p7", "ppc64pseries", "ppc8260", "ppc8560", "ppciseries", "ppc", "ppcpseries",
"riscv64", "s390", "s390x", "sh3", "sh4a", "sh4", "sh", "sparc64", "sparc64v",
"sparc", "sparcv8", "sparcv9", "sparcv9v", "x86_64"
]
rpm_arch_re = re.compile(r'(.*)\.(.*)')
rpm_nevr_re = re.compile(r'(\S+)-(?:(\d*):)?(.*)-(~?\w+[\w.]*)')
try:
arch = None
rpm_arch_match = rpm_arch_re.match(packagename)
if rpm_arch_match:
nevr, arch = rpm_arch_match.groups()
if arch in redhat_rpm_arches:
packagename = nevr
rpm_nevr_match = rpm_nevr_re.match(packagename)
if rpm_nevr_match:
name, epoch, version, release = rpm_nevr_re.match(packagename).groups()
if not version or not version.split('.')[0].isdigit():
return None
else:
return None
except AttributeError as e:
self.module.fail_json(
msg='Error attempting to parse package: %s, %s' % (packagename, to_native(e)),
rc=1,
results=[]
)
if not epoch:
epoch = "0"
if ':' in name:
epoch_name = name.split(":")
epoch = epoch_name[0]
name = ''.join(epoch_name[1:])
result = {
'name': name,
'epoch': epoch,
'release': release,
'version': version,
}
return result
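# Illustration (not part of the module): running a plain NVR string through
# the same regular expressions shows what _packagename_dict() hands back.
# Assuming only the standard library:
#
#   import re
#   rpm_nevr_re = re.compile(r'(\S+)-(?:(\d*):)?(.*)-(~?\w+[\w.]*)')
#   rpm_nevr_re.match('foo-1.0-2').groups()   # -> ('foo', None, '1.0', '2')
#
# The epoch falls back to "0" when it is absent, and an architecture suffix
# such as '.x86_64' is stripped beforehand if it matches one of the known
# rpm arches listed above.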
# Original implementation from yum.rpmUtils.miscutils (GPLv2+)
# http://yum.baseurl.org/gitweb?p=yum.git;a=blob;f=rpmUtils/miscutils.py
def _compare_evr(self, e1, v1, r1, e2, v2, r2):
# return 1: a is newer than b
# 0: a and b are the same version
# -1: b is newer than a
if e1 is None:
e1 = '0'
else:
e1 = str(e1)
v1 = str(v1)
r1 = str(r1)
if e2 is None:
e2 = '0'
else:
e2 = str(e2)
v2 = str(v2)
r2 = str(r2)
# print '%s, %s, %s vs %s, %s, %s' % (e1, v1, r1, e2, v2, r2)
rc = dnf.rpm.rpm.labelCompare((e1, v1, r1), (e2, v2, r2))
# print '%s, %s, %s vs %s, %s, %s = %s' % (e1, v1, r1, e2, v2, r2, rc)
return rc
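# Worked example (illustrative; uses the rpm Python bindings' labelCompare,
# which the module reaches via dnf.rpm.rpm):
#
#   >>> from rpm import labelCompare
#   >>> labelCompare(('0', '1.0', '2'), ('0', '1.0', '1'))
#   1        # first EVR is newer
#   >>> labelCompare(('0', '1.0', '1'), ('0', '1.0', '1'))
#   0        # same version
#   >>> labelCompare(('0', '1.0', '1'), ('1', '0.9', '1'))
#   -1       # second EVR wins: a higher epoch trumps version/release
#
# _is_newer_version_installed() below maps a return value of 1 (installed EVR
# newer than the requested one) onto the allow_downgrade decision.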
def fetch_rpm_from_url(self, spec):
# FIXME: Remove this once this PR is merged:
# https://github.com/ansible/ansible/pull/19172
@ -287,14 +403,20 @@ class DnfModule(YumDnf):
try:
rsp, info = fetch_url(self.module, spec)
if not rsp:
self.module.fail_json(msg="Failure downloading %s, %s" % (spec, info['msg']))
self.module.fail_json(
msg="Failure downloading %s, %s" % (spec, info['msg']),
results=[],
)
data = rsp.read(BUFSIZE)
while data:
package_file.write(data)
data = rsp.read(BUFSIZE)
package_file.close()
except Exception as e:
self.module.fail_json(msg="Failure downloading %s, %s" % (spec, to_native(e)))
self.module.fail_json(
msg="Failure downloading %s, %s" % (spec, to_native(e)),
results=[],
)
return package_file.name
@ -308,7 +430,8 @@ class DnfModule(YumDnf):
if self.module.check_mode:
self.module.fail_json(
msg="`{0}` is not installed, but it is required"
"for the Ansible dnf module.".format(package)
"for the Ansible dnf module.".format(package),
results=[],
)
self.module.run_command(['dnf', 'install', '-y', package], check_rc=True)
@ -323,7 +446,8 @@ class DnfModule(YumDnf):
except ImportError:
self.module.fail_json(
msg="Could not import the dnf python module. "
"Please install `{0}` package.".format(package)
"Please install `{0}` package.".format(package),
results=[],
)
def _configure_base(self, base, conf_file, disable_gpg_check, installroot='/'):
@ -356,7 +480,7 @@ class DnfModule(YumDnf):
# Set disable_excludes
if self.disable_excludes:
conf.disable_excludes = [self.disable_excludes]
conf.disable_excludes.append(self.disable_excludes)
# Set releasever
if self.releasever is not None:
@ -374,10 +498,15 @@ class DnfModule(YumDnf):
# Fail if we can't read the configuration file.
if not os.access(conf_file, os.R_OK):
self.module.fail_json(
msg="cannot read configuration file", conf_file=conf_file)
msg="cannot read configuration file", conf_file=conf_file,
results=[],
)
else:
conf.config_file_path = conf_file
# Default in dnf upstream is true
conf.clean_requirements_on_remove = self.autoremove
# Read the configuration file
conf.read()
@ -412,22 +541,6 @@ class DnfModule(YumDnf):
base.update_cache()
return base
def _package_dict(self, package):
"""Return a dictionary of information for the package."""
# NOTE: This no longer contains the 'dnfstate' field because it is
# already known based on the query type.
result = {
'name': package.name,
'arch': package.arch,
'epoch': str(package.epoch),
'release': package.release,
'version': package.version,
'repo': package.repoid}
result['nevra'] = '{epoch}:{name}-{version}-{release}.{arch}'.format(
**result)
return result
def list_items(self, command):
"""List package info based on the command."""
# Rename updates to upgrades
@ -449,14 +562,138 @@ class DnfModule(YumDnf):
packages = dnf.subject.Subject(command).get_best_query(self.base.sack)
results = [self._package_dict(package) for package in packages]
self.module.exit_json(results=results)
self.module.exit_json(msg="", results=results)
def _mark_package_install(self, pkg_spec):
"""Mark the package for install."""
def _is_installed(self, pkg):
installed = self.base.sack.query().installed()
if installed.filter(name=pkg):
return True
else:
return False
def _is_group_installed(self, group):
"""
Check if a group is installed (i.e. every package in the group's package set is installed)
This is necessary until the upstream dnf API bug is fixed where installing
a group via the dnf API doesn't actually mark the group as installed
https://bugzilla.redhat.com/show_bug.cgi?id=1620324
"""
pkg_set = []
dnf_group = self.base.comps.group_by_pattern(group)
try:
self.base.install(pkg_spec)
except dnf.exceptions.MarkingError:
self.module.fail_json(msg="No package {0} available.".format(pkg_spec))
if dnf_group:
for pkg_type in dnf.const.GROUP_PACKAGE_TYPES:
for pkg in getattr(dnf_group, '{0}_packages'.format(pkg_type)):
pkg_set.append(pkg.name)
except AttributeError as e:
self.module.fail_json(
msg="Error attempting to determine package group installed status: {0}".format(group),
results=[],
rc=1,
failures=[to_native(e), ],
)
for pkg in pkg_set:
if not self._is_installed(pkg):
return False
return True
def _is_newer_version_installed(self, pkg_name):
candidate_pkg = self._packagename_dict(pkg_name)
if not candidate_pkg:
# The user didn't provide a versioned rpm, so version checking is
# not required
return False
installed = self.base.sack.query().installed()
installed_pkg = installed.filter(name=candidate_pkg['name']).run()
if installed_pkg:
installed_pkg = installed_pkg[0]
# this looks weird but one is a dict and the other is a dnf.Package
evr_cmp = self._compare_evr(
installed_pkg.epoch, installed_pkg.version, installed_pkg.release,
candidate_pkg['epoch'], candidate_pkg['version'], candidate_pkg['release'],
)
if evr_cmp == 1:
return True
else:
return False
else:
return False
def _mark_package_install(self, pkg_spec, upgrade=False):
"""Mark the package for install."""
is_newer_version_installed = self._is_newer_version_installed(pkg_spec)
is_installed = self._is_installed(pkg_spec)
try:
if self.allow_downgrade:
# dnf always allows downgrades, so we have to handle allow_downgrade
# ourselves because it opens the possibility of non-idempotent
# transactions on a system's package set (if the yum repo indexes
# many old NVRs)
if upgrade:
if is_installed:
self.base.upgrade(pkg_spec)
else:
self.base.install(pkg_spec)
else:
self.base.install(pkg_spec)
elif not self.allow_downgrade and is_newer_version_installed:
return {'failed': False, 'msg': '', 'failure': '', 'rc': 0}
elif not is_newer_version_installed:
if upgrade:
if is_installed:
self.base.upgrade(pkg_spec)
else:
self.base.install(pkg_spec)
else:
self.base.install(pkg_spec)
else:
if upgrade:
if is_installed:
self.base.upgrade(pkg_spec)
else:
self.base.install(pkg_spec)
else:
self.base.install(pkg_spec)
return {'failed': False, 'msg': 'Installed: {0}'.format(pkg_spec), 'failure': '', 'rc': 0}
except dnf.exceptions.MarkingError as e:
return {
'failed': True,
'msg': "No package {0} available.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
except dnf.exceptions.DepsolveError as e:
return {
'failed': True,
'msg': "Depsolve Error occured for package {0}.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
except dnf.exceptions.Error as e:
if to_text("already installed") in to_text(e):
return {'failed': False, 'msg': '', 'failure': ''}
else:
return {
'failed': True,
'msg': "Unknown Error occured for package {0}.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
def _parse_spec_group_file(self):
pkg_specs, grp_specs, filenames = [], [], []
@ -472,29 +709,75 @@ class DnfModule(YumDnf):
return pkg_specs, grp_specs, filenames
def _update_only(self, pkgs):
installed = self.base.sack.query().installed()
not_installed = []
for pkg in pkgs:
if installed.filter(name=pkg):
self.base.package_upgrade(pkg)
if self._is_installed(pkg):
try:
if isinstance(to_text(pkg), text_type):
self.base.upgrade(pkg)
else:
self.base.package_upgrade(pkg)
except Exception as e:
self.module.fail_json(
msg="Error occured attempting update_only operation: {0}".format(to_native(e)),
results=[],
rc=1,
)
else:
not_installed.append(pkg)
return not_installed
def _install_remote_rpms(self, filenames):
if int(dnf.__version__.split(".")[0]) >= 2:
pkgs = list(sorted(self.base.add_remote_rpms(list(filenames)), reverse=True))
else:
pkgs = []
for filename in filenames:
pkgs.append(self.base.add_remote_rpm(filename))
try:
for filename in filenames:
pkgs.append(self.base.add_remote_rpm(filename))
except IOError as e:
if to_text("Can not load RPM file") in to_text(e):
self.module.fail_json(
msg="Error occured attempting remote rpm install of package: {0}. {1}".format(filename, to_native(e)),
results=[],
rc=1,
)
if self.update_only:
self._update_only(pkgs)
else:
for pkg in pkgs:
self.base.package_install(pkg)
try:
if self._is_newer_version_installed(self._package_dict(pkg)['nevra']):
if self.allow_downgrade:
self.base.package_install(pkg)
else:
self.base.package_install(pkg)
except Exception as e:
self.module.fail_json(
msg="Error occured attempting remote rpm operation: {0}".format(to_native(e)),
results=[],
rc=1,
)
def ensure(self):
allow_erasing = False
response = {
'msg': "",
'changed': False,
'results': [],
'rc': 0
}
# Accumulate failures. Package management modules install what they can
# and fail with a message about what they can't.
failures = []
allow_erasing = False
failure_response = {
'msg': "",
'failures': [],
'results': [],
'rc': 1
}
# Autoremove is called alone
# Jump to remove path where base.autoremove() is run
@ -503,7 +786,11 @@ class DnfModule(YumDnf):
self.state = 'absent'
if self.names == ['*'] and self.state == 'latest':
self.base.upgrade_all()
try:
self.base.upgrade_all()
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occured attempting to upgrade all packages"
self.module.fail_json(**failure_response)
else:
pkg_specs, group_specs, filenames = self._parse_spec_group_file()
if group_specs:
@ -523,48 +810,72 @@ class DnfModule(YumDnf):
environments.append(environment.id)
else:
self.module.fail_json(
msg="No group {0} available.".format(group_spec))
msg="No group {0} available.".format(group_spec),
results=[],
)
if self.state in ['installed', 'present']:
# Install files.
self._install_remote_rpms(filenames)
for filename in filenames:
response['results'].append("Installed {0}".format(filename))
# Install groups.
for group in groups:
try:
self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
if self._is_group_installed(group):
response['results'].append("Group {0} already installed.".format(group))
else:
self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
response['results'].append("Group {0} installed.".format(group))
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occured attempting to install group: {0}".format(group)
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
# In dnf 2.0 if all the mandatory packages in a group do
# not install, an error is raised. We want to capture
# this but still install as much as possible.
failures.append((group, to_native(e)))
failure_response['failures'].append(" ".join((group, to_native(e))))
for environment in environments:
try:
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occured attempting to install environment: {0}".format(environment)
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
failures.append((environment, to_native(e)))
failure_response['failures'].append(" ".join((environment, to_native(e))))
# Install packages.
if self.update_only:
self._update_only(pkg_specs)
not_installed = self._update_only(pkg_specs)
for spec in not_installed:
response['results'].append("Packages providing %s not installed due to update_only specified" % spec)
else:
for pkg_spec in pkg_specs:
self._mark_package_install(pkg_spec)
install_result = self._mark_package_install(pkg_spec)
if install_result['failed']:
failure_response['msg'] += install_result['msg']
failure_response['failures'].append(install_result['failure'])
elif self.state == 'latest':
# "latest" is same as "installed" for filenames.
self._install_remote_rpms(filenames)
for filename in filenames:
response['results'].append("Installed {0}".format(filename))
for group in groups:
try:
try:
self.base.group_upgrade(group)
response['results'].append("Group {0} upgraded.".format(group))
except dnf.exceptions.CompsError:
# If not already installed, try to install.
self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
if not self.update_only:
# If not already installed, try to install.
self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
response['results'].append("Group {0} installed.".format(group))
except dnf.exceptions.Error as e:
failures.append((group, to_native(e)))
failure_response['failures'].append(" ".join((group, to_native(e))))
for environment in environments:
try:
@ -573,29 +884,32 @@ class DnfModule(YumDnf):
except dnf.exceptions.CompsError:
# If not already installed, try to install.
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occured attempting to install environment: {0}".format(environment)
except dnf.exceptions.Error as e:
failures.append((environment, to_native(e)))
failure_response['failures'].append(" ".join((environment, to_native(e))))
if self.update_only:
self._update_only(pkg_specs)
not_installed = self._update_only(pkg_specs)
for spec in not_installed:
response['results'].append("Packages providing %s not installed due to update_only specified" % spec)
else:
for pkg_spec in pkg_specs:
# best effort causes the latest package to be installed
# even if it was not previously installed
self.base.conf.best = True
try:
self.base.install(pkg_spec)
except dnf.exceptions.MarkingError as e:
failures.append((pkg_spec, to_native(e)))
install_result = self._mark_package_install(pkg_spec, upgrade=True)
if install_result['failed']:
failure_response['msg'] += install_result['msg']
failure_response['failures'].append(install_result['failure'])
else:
# state == absent
if self.autoremove:
self.base.conf.clean_requirements_on_remove = self.autoremove
if filenames:
self.module.fail_json(
msg="Cannot remove paths -- please specify package name.")
msg="Cannot remove paths -- please specify package name.",
results=[],
)
for group in groups:
try:
@ -604,6 +918,24 @@ class DnfModule(YumDnf):
# Group is already uninstalled.
pass
# This is necessary until the upstream dnf API bug is fixed where installing
# a group via the dnf API doesn't actually mark the group as installed
# https://bugzilla.redhat.com/show_bug.cgi?id=1620324
if self._is_group_installed(group):
dnf_group = self.base.comps.group_by_pattern(group)
try:
if dnf_group:
for pkg_type in dnf.const.GROUP_PACKAGE_TYPES:
for pkg_spec in getattr(dnf_group, '{0}_packages'.format(pkg_type)):
self.base.remove(pkg_spec.name)
except AttributeError as e:
self.module.fail_json(
msg="Error attempting to determine package group installed status: {0}".format(group),
results=[],
rc=1,
failures=[to_native(e), ],
)
for environment in environments:
try:
self.base.environment_remove(environment)
@ -623,45 +955,57 @@ class DnfModule(YumDnf):
if self.autoremove:
self.base.autoremove()
if not self.base.resolve(allow_erasing=allow_erasing):
if failures:
self.module.fail_json(
msg='Failed to install some of the specified packages',
failures=failures
)
self.module.exit_json(msg="Nothing to do")
else:
if self.module.check_mode:
if failures:
self.module.fail_json(
msg='Failed to install some of the specified packages',
failures=failures
)
self.module.exit_json(changed=True)
try:
if not self.base.resolve(allow_erasing=allow_erasing):
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
try:
self.base.download_packages(self.base.transaction.install_set)
except dnf.exceptions.DownloadError as e:
self.module.fail_json(msg="Failed to download packages: {0}".format(to_text(e)))
response = {'changed': True, 'results': []}
if self.download_only:
for package in self.base.transaction.install_set:
response['results'].append("Downloaded: {0}".format(package))
response['msg'] = "Nothing to do"
self.module.exit_json(**response)
else:
self.base.do_transaction()
for package in self.base.transaction.install_set:
response['results'].append("Installed: {0}".format(package))
for package in self.base.transaction.remove_set:
response['results'].append("Removed: {0}".format(package))
response['changed'] = True
if self.module.check_mode:
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
response['msg'] = "Check mode: No changes made, but would have if not in check mode"
self.module.exit_json(**response)
if failures:
self.module.fail_json(
msg='Failed to install some of the specified packages',
failures=failures
)
self.module.exit_json(**response)
try:
self.base.download_packages(self.base.transaction.install_set)
except dnf.exceptions.DownloadError as e:
self.module.fail_json(
msg="Failed to download packages: {0}".format(to_text(e)),
results=[],
)
if self.download_only:
for package in self.base.transaction.install_set:
response['results'].append("Downloaded: {0}".format(package))
self.module.exit_json(**response)
else:
self.base.do_transaction()
for package in self.base.transaction.install_set:
response['results'].append("Installed: {0}".format(package))
for package in self.base.transaction.remove_set:
response['results'].append("Removed: {0}".format(package))
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
self.module.exit_json(**response)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occured: {0}".format(to_native(e))
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
if to_text("already installed") in to_text(e):
response['changed'] = False
response['results'].append("Package already installed: {0}".format(to_native(e)))
self.module.exit_json(**response)
else:
failure_response['msg'] = "Unknown Error occured: {0}".format(to_native(e))
self.module.fail_json(**failure_response)
@staticmethod
def has_dnf():
@ -673,9 +1017,15 @@ class DnfModule(YumDnf):
# Check if autoremove is called correctly
if self.autoremove:
if LooseVersion(dnf.__version__) < LooseVersion('2.0.1'):
self.module.fail_json(msg="Autoremove requires dnf>=2.0.1. Current dnf version is %s" % dnf.__version__)
self.module.fail_json(
msg="Autoremove requires dnf>=2.0.1. Current dnf version is %s" % dnf.__version__,
results=[],
)
if self.state not in ["absent", None]:
self.module.fail_json(msg="Autoremove should be used alone or with state=absent")
self.module.fail_json(
msg="Autoremove should be used alone or with state=absent",
results=[],
)
# Set state as installed by default
# This is not set in AnsibleModule() because the following shouldn't happen
@ -688,12 +1038,15 @@ class DnfModule(YumDnf):
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot
)
self.list_items(self.module, self.list)
self.list_items(self.list)
else:
# Note: base takes a long time to run so we want to check for failure
# before running it.
if not dnf.util.am_i_root():
self.module.fail_json(msg="This command has to be run under the root user.")
self.module.fail_json(
msg="This command has to be run under the root user.",
results=[],
)
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot
@ -722,7 +1075,7 @@ def main():
try:
module_implementation.run()
except dnf.exceptions.RepoError as de:
module.exit_json(msg="Failed to synchronize repodata: {0}".format(de))
module.exit_json(msg="Failed to synchronize repodata: {0}".format(to_native(de)))
if __name__ == '__main__':


@ -23,6 +23,16 @@ description:
- Installs, upgrades, downgrades, removes, and lists packages and groups with the I(yum) package manager.
- This module only works on Python 2. If you require Python 3 support see the M(dnf) module.
options:
use_backend:
description:
- This module supports C(yum) (as it always has), which is known as C(yum3)/C(YUM3)/C(yum-deprecated) by
upstream yum developers. As of Ansible 2.7+, this module also supports C(YUM4), which is the
"new yum", backed by C(dnf).
- By default, this module will select the backend based on the C(ansible_pkg_mgr) fact.
required: false
default: "auto"
choices: [ auto, yum, yum4, dnf ]
version_added: "2.7"
name:
description:
- A package name or package specifier with version, like C(name-1.0).
@ -1501,6 +1511,8 @@ def main():
# list=repos
# list=pkgspec
yumdnf_argument_spec['argument_spec']['use_backend'] = dict(default='auto', choices=['auto', 'yum', 'yum4', 'dnf'])
module = AnsibleModule(
**yumdnf_argument_spec
)

View file

@ -0,0 +1,100 @@
# (c) 2018, Ansible Project
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.plugins.action import ActionBase
try:
from __main__ import display
except ImportError:
from ansible.utils.display import Display
display = Display()
class ActionModule(ActionBase):
TRANSFERS_FILES = False
def run(self, tmp=None, task_vars=None):
'''
Action plugin handler for yum3 vs yum4(dnf) operations.
Enables the yum module to use yum3 and/or yum4. Yum4 is a yum
command-line compatibility layer on top of dnf. Since the Ansible
modules for yum (aka yum3) and dnf (aka yum4) call the yum3 and dnf
python APIs natively on the backend, we need to handle this here and
pass off to the correct Ansible module to execute on the remote system.
'''
self._supports_check_mode = True
self._supports_async = True
result = super(ActionModule, self).run(tmp, task_vars)
del tmp # tmp no longer has any effect
# Carry-over concept from the package action plugin
module = self._task.args.get('use_backend', "auto")
if module == 'auto':
try:
if self._task.delegate_to: # if we delegate, we should use delegated host's facts
module = self._templar.template("{{hostvars['%s']['ansible_facts']['pkg_mgr']}}" % self._task.delegate_to)
else:
module = self._templar.template("{{ansible_facts.pkg_mgr}}")
except Exception:
pass # could not get it from template!
if module not in ["yum", "yum4", "dnf"]:
facts = self._execute_module(module_name="setup", module_args=dict(filter="ansible_pkg_mgr", gather_subset="!all"), task_vars=task_vars)
display.debug("Facts %s" % facts)
module = facts.get("ansible_facts", {}).get("ansible_pkg_mgr", "auto")
if (not self._task.delegate_to or self._task.delegate_facts) and module != 'auto':
result['ansible_facts'] = {'pkg_mgr': module}
if module != "auto":
if module == "yum4":
module = "dnf"
if module not in self._shared_loader_obj.module_loader:
result.update({'failed': True, 'msg': "Could not find a yum module backend for %s." % module})
else:
# run either the yum (yum3) or dnf (yum4) backend module
new_module_args = self._task.args.copy()
if 'use_backend' in new_module_args:
del new_module_args['use_backend']
display.vvvv("Running %s as the backend for the yum action plugin" % module)
result.update(self._execute_module(module_name=module, module_args=new_module_args, task_vars=task_vars, wrap_async=self._task.async_val))
# Now fall through to cleanup
else:
result.update(
{
'failed': True,
'msg': ("Could not detect which major revision of yum is in use, which is required to determine module backend.",
"You can manually specify use_backend to tell the module whether to use the yum (yum3) or dnf (yum4) backend})"),
}
)
# Now fall through to cleanup
# Cleanup
if not self._task.async_val:
# remove a temporary path we created
self._remove_tmp_path(self._connection._shell.tmpdir)
return result


@ -301,6 +301,11 @@
- "'changed' in dnf_result"
- "'msg' in dnf_result"
- name: verify that bc is not installed
dnf:
name: bc
state: absent
- name: install the group again but also with a package that is not yet installed
dnf:
name:


@ -87,6 +87,7 @@
dnf:
name: "{{ repodir }}/foo-1.0-1.{{ ansible_architecture }}.rpm"
state: present
allow_downgrade: True
register: dnf_result
- name: Check foo with rpm


@ -42,7 +42,7 @@
mode: 0755
- name: Create RPMs and put them into a repo
shell: "python /tmp/create-repo.py {{ ansible_architecture }}"
shell: "{{ansible_python_interpreter}} /tmp/create-repo.py {{ ansible_architecture }}"
register: repo
- set_fact:
@ -56,7 +56,7 @@
gpgcheck: no
- name: Create RPMs and put them into a repo (i686)
shell: "python /tmp/create-repo.py i686"
shell: "{{ansible_python_interpreter}} /tmp/create-repo.py i686"
register: repo_i686
- set_fact:
@ -70,7 +70,7 @@
gpgcheck: no
- name: Create RPMs and put them into a repo (ppc64)
shell: "python /tmp/create-repo.py ppc64"
shell: "{{ansible_python_interpreter}} /tmp/create-repo.py ppc64"
register: repo_ppc64
- set_fact:


@ -46,8 +46,8 @@
- ansible_distribution in ['RedHat', 'CentOS', 'ScientificLinux'] and ansible_distribution_major_version|int <= 6
when:
- ansible_distribution in ['RedHat', 'CentOS', 'ScientificLinux', 'Fedora']
- ansible_python.version.major == 2
# DNF1 doesn't handle downgrade operations properly (Fedora < 26)
- block:
- include: 'repo.yml'
always:
@ -58,16 +58,10 @@
- command: yum clean metadata
when:
- ansible_distribution in ['RedHat', 'CentOS', 'ScientificLinux', 'Fedora']
- ansible_python.version.major == 2
# We can't run yum --installroot tests on dnf systems. Dnf systems revert to
# yum-deprecated, and yum-deprecated refuses to run if yum.conf exists
# so we cannot configure yum-deprecated correctly in an empty /tmp/fake.root/
# It will always run with $releasever unset
- include: 'yuminstallroot.yml'
when:
- (ansible_distribution in ['RedHat', 'CentOS', 'ScientificLinux'] or (ansible_distribution in ['Fedora'] and ansible_distribution_major_version|int < 23))
- ansible_python.version.major == 2
- ansible_distribution in ['RedHat', 'CentOS', 'ScientificLinux', 'Fedora']
# el6 has a broken yum group implementation, when you try to remove a group it goes through
# deps and ends up with trying to remove yum itself and the whole process fails
@ -75,4 +69,3 @@
- include: 'yum_group_remove.yml'
when:
- (ansible_distribution in ['RedHat', 'CentOS', 'ScientificLinux'] and ansible_distribution_major_version|int > 6) or ansible_distribution in ['Fedora']
- ansible_python.version.major == 2


@ -73,10 +73,11 @@
name: foo
state: absent
# ============================================================================
- name: Install 1:foo-1.0-2
- name: Downgrade foo
yum:
name: "1:foo-1.0-2.{{ ansible_architecture }}"
name: foo-1.0-1
state: present
allow_downgrade: yes
register: yum_result
- name: Check foo with rpm
@ -87,30 +88,7 @@
assert:
that:
- "yum_result.changed"
- "rpm_result.stdout.startswith('foo-1.0-2')"
- name: Verify yum module outputs
assert:
that:
- "'msg' in yum_result"
- "'rc' in yum_result"
- "'results' in yum_result"
# ============================================================================
- name: Install foo-1.0-2 again
yum:
name: foo-1.0-2
state: present
register: yum_result
- name: Check foo with rpm
shell: rpm -q foo
register: rpm_result
- name: Verify installation
assert:
that:
- "not yum_result.changed"
- "rpm_result.stdout.startswith('foo-1.0-2')"
- "rpm_result.stdout.startswith('foo-1.0-1')"
- name: Verify yum module outputs
assert:
@ -279,30 +257,6 @@
- "not yum_result.changed"
- "rpm_result.stdout.startswith('foo-1.0-2')"
- name: Verify yum module outputs
assert:
that:
- "'msg' in yum_result"
- "'rc' in yum_result"
- "'results' in yum_result"
# ============================================================================
- name: Downgrade foo
yum:
name: foo-1.0-1
state: present
allow_downgrade: yes
register: yum_result
- name: Check foo with rpm
shell: rpm -q foo
register: rpm_result
- name: Verify installation
assert:
that:
- "yum_result.changed"
- "rpm_result.stdout.startswith('foo-1.0-1')"
- name: Verify yum module outputs
assert:
that:
@ -506,3 +460,99 @@
yum:
name: foo
state: absent
# FIXME: dnf currently doesn't support epoch as part of its pkg_spec for
# finding install candidates
# https://bugzilla.redhat.com/show_bug.cgi?id=1619687
- block:
- name: Install 1:foo-1.0-2
yum:
name: "1:foo-1.0-2.{{ ansible_architecture }}"
state: present
register: yum_result
- name: Check foo with rpm
shell: rpm -q foo
register: rpm_result
- name: Verify installation
assert:
that:
- "yum_result.changed"
- "rpm_result.stdout.startswith('foo-1.0-2')"
- name: Verify yum module outputs
assert:
that:
- "'msg' in yum_result"
- "'rc' in yum_result"
- "'results' in yum_result"
always:
- name: Clean up
yum:
name: foo
state: absent
when: ansible_pkg_mgr == 'yum'
# DNF1 (Fedora < 26) had some issues:
# - did not accept architecture tag as valid component of a package spec unless
# installing a file (i.e. can't search the repo)
# - did not handle downgrade transactions via the API properly, marking them as a
# conflict
#
# NOTE: Both DNF1 and Fedora < 26 have long been EOL'd by their respective
# upstreams
- block:
# ============================================================================
- name: Install foo-1.0-2
yum:
name: "foo-1.0-2.{{ ansible_architecture }}"
state: present
register: yum_result
- name: Check foo with rpm
shell: rpm -q foo
register: rpm_result
- name: Verify installation
assert:
that:
- "yum_result.changed"
- "rpm_result.stdout.startswith('foo-1.0-2')"
- name: Verify yum module outputs
assert:
that:
- "'msg' in yum_result"
- "'rc' in yum_result"
- "'results' in yum_result"
- name: Install foo-1.0-2 again
yum:
name: foo-1.0-2
state: present
register: yum_result
- name: Check foo with rpm
shell: rpm -q foo
register: rpm_result
- name: Verify installation
assert:
that:
- "not yum_result.changed"
- "rpm_result.stdout.startswith('foo-1.0-2')"
- name: Verify yum module outputs
assert:
that:
- "'msg' in yum_result"
- "'rc' in yum_result"
- "'results' in yum_result"
always:
- name: Clean up
yum:
name: foo
state: absent
when: not (ansible_distribution == "Fedora" and ansible_distribution_major_version|int < 26)

View file

@ -477,6 +477,11 @@
- set_fact:
pkg_url: https://s3.amazonaws.com/ansible-ci-files/test/integration/targets/yum/fpaste-0.3.7.4.1-2.el7.noarch.rpm
when: ansible_python.version.major == 2
- set_fact:
pkg_url: https://s3.amazonaws.com/ansible-ci-files/test/integration/targets/yum/fpaste-0.3.9.2-1.fc28.noarch.rpm
when: ansible_python.version.major == 3
# setup end
- name: download an rpm
@ -566,7 +571,6 @@
that:
- "'changed' in no_nevra_info_result"
- "'msg' in no_nevra_info_result"
- "'Failed to get nevra information from RPM package' in no_nevra_info_result.msg"
- name: Delete a temp RPM file
file:
@ -583,102 +587,131 @@
yum_version: "{%- if item.yumstate == 'installed' -%}{{ item.version }}{%- else -%}{{ yum_version }}{%- endif -%}"
with_items: "{{ yum_version.results }}"
- name: check whether yum supports disableexcludes (>= 3.4)
set_fact:
supports_disable_excludes: "{{ yum_version is version_compare('3.4.0', '>=') }}"
- block:
- name: check whether yum supports disableexcludes (>= 3.4)
set_fact:
supports_disable_excludes: "{{ yum_version is version_compare('3.4.0', '>=') }}"
when: ansible_pkg_mgr == "yum"
- name: uninstall zip
yum: name=zip state=removed
- name: unset disableexcludes tests for dnf(yum4) backend temporarily
set_fact:
supports_disable_excludes: True
when: ansible_pkg_mgr == "dnf"
- name: check zip with rpm
shell: rpm -q zip
ignore_errors: True
register: rpm_zip_result
- name: uninstall bc
yum: name=bc state=removed
- name: verify zip is uninstalled
assert:
that:
- "rpm_zip_result is failed"
- name: check bc with rpm
shell: rpm -q bc
ignore_errors: True
register: rpm_bc_result
- name: exclude zip
lineinfile:
dest: /etc/yum.conf
regexp: (^exclude=)(.)*
line: "exclude=zip*"
state: present
- name: verify bc is uninstalled
assert:
that:
- "rpm_bc_result is failed"
# begin test case where disable_excludes is supported
- name: Try install zip without disable_excludes
yum: name=zip state=latest
register: yum_zip_result
ignore_errors: True
when: supports_disable_excludes
- name: exclude bc (yum backend)
lineinfile:
dest: /etc/yum.conf
regexp: (^exclude=)(.)*
line: "exclude=bc*"
state: present
when: ansible_pkg_mgr == 'yum'
- name: verify zip did not install because it is in exclude list
assert:
that:
- "yum_zip_result is failed"
when: supports_disable_excludes
- name: exclude bc (dnf backend)
lineinfile:
dest: /etc/dnf/dnf.conf
regexp: (^excludepkgs=)(.)*
line: "excludepkgs=bc*"
state: present
when: ansible_pkg_mgr == 'dnf'
- name: install zip with disable_excludes
yum: name=zip state=latest disable_excludes=all
register: yum_zip_result_using_excludes
when: supports_disable_excludes
# begin test case where disable_excludes is supported
- name: Try install bc without disable_excludes
yum: name=bc state=latest
register: yum_bc_result
ignore_errors: True
when: supports_disable_excludes
- name: verify zip did install using disable_excludes=all
assert:
that:
- "yum_zip_result_using_excludes is success"
- "yum_zip_result_using_excludes is changed"
- "yum_zip_result_using_excludes is not failed"
when: supports_disable_excludes
- name: verify bc did not install because it is in exclude list
assert:
that:
- "yum_bc_result is failed"
when: supports_disable_excludes
- name: remove exclude zip (cleanup yum.conf)
lineinfile:
dest: /etc/yum.conf
regexp: (^exclude=zip*)
line: "exclude="
state: present
when: supports_disable_excludes
# end test case where disable_excludes is supported
- name: install bc with disable_excludes
yum: name=bc state=latest disable_excludes=all
register: yum_bc_result_using_excludes
when: supports_disable_excludes
# begin test case where disable_excludes is not supported
- name: Try install zip with disable_excludes
yum: name=zip state=latest disable_excludes=all
register: yum_fail_zip_result_old_yum
ignore_errors: True
when: not supports_disable_excludes
- name: verify bc did install using disable_excludes=all
assert:
that:
- "yum_bc_result_using_excludes is success"
- "yum_bc_result_using_excludes is changed"
- "yum_bc_result_using_excludes is not failed"
when: supports_disable_excludes
- name: verify packages did not install because yum version is unsupported
assert:
that:
- "yum_fail_zip_result_old_yum is failed"
when: not supports_disable_excludes
- name: remove exclude bc (cleanup yum.conf)
lineinfile:
dest: /etc/yum.conf
regexp: (^exclude=bc*)
line: "exclude="
state: present
when: supports_disable_excludes and (ansible_pkg_mgr == 'yum')
- name: verify yum module outputs
assert:
that:
- "'is available in yum version 3.4 and onwards.' in yum_fail_zip_result_old_yum.msg"
when: not supports_disable_excludes
- name: remove exclude bc (cleanup dnf.conf)
lineinfile:
dest: /etc/dnf/dnf.conf
regexp: (^excludepkgs=bc*)
line: "excludepkgs="
state: present
when: ansible_pkg_mgr == 'dnf'
# end test case where disable_excludes is supported
- name: remove exclude zip (cleanup yum.conf)
lineinfile:
dest: /etc/yum.conf
regexp: (^exclude=zip*)
line: "exclude="
state: present
when: not supports_disable_excludes
# begin test case where disable_excludes is not supported
- name: Try install bc with disable_excludes
yum: name=bc state=latest disable_excludes=all
register: yum_fail_bc_result_old_yum
ignore_errors: True
when: not supports_disable_excludes
- name: install zip (bring test env in same state as when testing started)
yum: name=zip state=latest
register: yum_zip_result_old_yum
when: not supports_disable_excludes
- name: verify packages did not install because yum version is unsupported
assert:
that:
- "yum_fail_bc_result_old_yum is failed"
when: not supports_disable_excludes
- name: verify zip installed
assert:
that:
- "yum_zip_result_old_yum is success"
- "yum_zip_result_old_yum is changed"
- "yum_zip_result_old_yum is not failed"
when: not supports_disable_excludes
# end test case where disable_excludes is not supported
- name: verify yum module outputs
assert:
that:
- "'is available in yum version 3.4 and onwards.' in yum_fail_bc_result_old_yum.msg"
when: not supports_disable_excludes
- name: remove exclude bc (cleanup yum.conf)
lineinfile:
dest: /etc/yum.conf
regexp: (^exclude=bc*)
line: "exclude="
state: present
when: not supports_disable_excludes and ansible_pkg_mgr == 'yum'
- name: install bc (bring test env in same state as when testing started)
yum: name=bc state=latest
register: yum_bc_result_old_yum
when: not supports_disable_excludes
- name: verify bc installed
assert:
that:
- "yum_bc_result_old_yum is success"
- "yum_bc_result_old_yum is changed"
- "yum_bc_result_old_yum is not failed"
when: not supports_disable_excludes and ansible_pkg_mgr == "yum"
# end test case where disable_excludes is not supported
# Fedora < 26 has a bug in dnf where package excludes in dnf.conf aren't
# actually honored and those releases are EOL'd so we have no expectation they
# will ever be fixed
when: not ((ansible_distribution == "Fedora") and (ansible_distribution_major_version|int < 26))


@ -5,6 +5,16 @@
with_items:
- "@Development Tools"
- yum-utils
when: ansible_pkg_mgr == "yum"
- name: install a group to test and dnf-utils
yum:
name: "{{ item }}"
state: present
with_items:
- "@Development Tools"
- dnf-utils
when: ansible_pkg_mgr == "dnf"
- name: check mode remove the group
yum:


@ -36,7 +36,7 @@
assert:
that:
- lib_result.failed
- "lib_result.msg=='No package libbdplus available.'"
- "lib_result.msg=='Failed to install some of the specified packages'"
- name: re-add rpmfusion
yum_repository: