Henrik's thoughts on life in IT, data and information management, cloud computing, cognitive computing, covering IBM Db2, IBM Cloud, Watson, Amazon Web Services, Microsoft Azure and more.
Thursday, November 16, 2023
Db2 11.5.9 is available
Thursday, July 1, 2021
Db2 11.5.6 is available
You have probably already noticed that a new release of Db2 for Linux, UNIX, and Windows is available: Db2 11.5.6. You can download the Db2 Fix Pack via the usual support site. The related documentation highlights the following features:
- Improved high availability with Advanced Log Space Management
- Graph modeling and analysis of Db2 data using IBM Db2 Graph
- Restrictions lifted on accessing column-organized tables
- Technical preview update to Machine Learning Optimizer
- New Click-to-Containerize utility
Aside from the highlights page, I usually go over the enhancements by category. Here are my personal highlights:
- The SQL enhancements include new NOWAIT and WAIT clauses for SELECT, UPDATE and DELETE statements. You can now specify at the statement level how many seconds to wait for locks.
- The high availability, backup, resiliency, and recovery enhancements list improvements to ADMIN_MOVE_TABLE and also bring news on Pacemaker.
- And among the workload management enhancements, you can now enable the Adaptive Workload Manager.
I am sure that we are going to learn all the details at the IDUG EMEA 2021 conference in Edinburgh, Scotland, in October. Mark your calendars.
Thursday, November 19, 2020
New Db2 V11.5 Mod Pack 5
During the currently ongoing virtual IDUG EMEA 2020 conference, IBM released Mod Pack 5 for Db2 11.5. As is usual with modification packs, it brings a long list of new features and enhancements to the current version of Db2. You can download this new release and other Db2 versions from the usual Db2 download page. As of this writing, the Db2 Docker image has not been updated.
Friday, November 29, 2019
New Db2 Fix Packs and Mod Packs available
Although the overview page lists Db2 11.5 GA as most recent, the page Mod Pack and Fix Pack Updates for 11.5 reveals new container-only Db2 Mod Pack releases.
Wednesday, November 28, 2018
Db2 V11.1 Mod Pack 4 and Fix Pack 4 is available
- There are several improvements to High Availability when using HADR.
- db2pd has a new option "-barstats" to monitor progress and performance of backup and restore operations.
- ADMIN_MOVE_TABLE has seen some enhancements, including a new ESTIMATE option.
- Application developers benefit from new JSON support that features functions like JSON_QUERY, JSON_TABLE and JSON_EXISTS (Hello, XML developers!). A small example follows this list.
- Several new data sources can be integrated using the federation capability. This includes SAP HANA, HDFS parquet files and CouchDB.
- A long list of Db2 pureScale improvements for performance, administration and monitoring.
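To give an idea of the new SQL/JSON functions, here is a minimal sketch. The CUSTOMERS table and its INFO column holding JSON documents are made up for illustration; JSON_VALUE extracts a scalar value, JSON_EXISTS tests whether a path is present:

-- names of all customers whose JSON document contains a city in the address
SELECT JSON_VALUE(info, '$.name' RETURNING VARCHAR(60)) AS name
FROM customers
WHERE JSON_EXISTS(info, '$.address.city');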
If you have feedback, suggestions, or questions about this post, please reach out to me on Twitter (@data_henrik) or LinkedIn.
Friday, December 16, 2016
New Mod Pack and Fix Pack for DB2 V11 available now
If you are one of the administrators or developers impacted by system and code freezes over the last weeks of the year, then the good news is that you can use the time to explore some great enhancements to DB2. Check out the summary page in the DB2 Knowledge Center for an overview. Here are my favorites:
- DB2 supports PKCS #11 keystores now, i.e., Hardware Security Modules (HSMs) can now be used with DB2, extending the choice to local keystores, centrally managed keystores, and hardware modules (see the configuration sketch after this list).
- Lots of improvements to DB2 BLU. If you are on Linux OS on IBM z Systems ("z/Linux") then you will be happy about DB2 BLU performance improvements on z13. There are improvements for other processors, too.
- The Workload Manager (WLM) and related monitoring have been enhanced, giving deeper insight and more control of long running complex queries. There are also new features related to CPU management.
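For the HSM support, the keystore type and location are set in the database manager configuration. A minimal sketch, with a made-up path to the PKCS #11 configuration file (the HSM-specific details such as library and token go into that file; consult the DB2 Knowledge Center for the exact settings):

db2 update dbm cfg using keystore_type pkcs11 keystore_location /home/db2inst1/hsm_pkcs11.cfg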
Sounds interesting? The updated version of DB2 can be downloaded from the usual support page offering all the fix packs (and now Mod Packs) for the recent versions of DB2.
Friday, June 12, 2015
DB2 pureScale for the Masses
DB2 pureScale Cluster (nanoCluster)
The following is an unsorted and incomplete list of features and good-to-know items that come to mind when talking about DB2 pureScale:
- License: One of my favorite pages in the DB2 Knowledge Center is "Functionality in DB2 product editions and DB2 offerings". It shows that pureScale is included in, or can be purchased as an add-on (the Business Application Continuity/BAC offering) for, almost all DB2 editions.
- Application Transparency: How about writing a pureScale-enabled application without knowing about it? There are no special requirements to consider when writing an application that will run on a pureScale cluster.
- Cluster Size and Scale-Out: You can start small and increase the cluster size, even online, depending on your needs.
- Availability: DB2 supports continuous availability of the pureScale cluster by its system design and by features like rolling fixpack updates. Depending on requirements the cluster can span data centers and it is then called Geographically Dispersed pureScale Cluster (GDPC or "stretch cluster"). Basically, if one data center becomes unavailable, DB2 databases continue to be available and are managed from the hardware in the second data center.
Two pureScale clusters can be linked by, e.g., the built-in HADR feature, where transaction logs are shipped to and applied on the standby cluster, increasing availability even further (a simplified configuration sketch follows this list).
For increased availability, network adapters on each machine and the network switches can be redundant.
- Hardware/OS: DB2 runs on the POWER and Intel platforms on AIX, Red Hat, and SuSE operating systems. InfiniBand adapters can be used for highest network performance, but even the GDPC/stretch cluster version of DB2 pureScale only requires a regular TCP/IP network ("vanilla Ethernet"). If you don't want to run pureScale on dedicated hardware, no problem: virtual machines (VMware and KVM) and even VM mobility are supported, too.
- Smart Workload Management and Processing: Depending on requirements and available resources, applications or specific workloads can be tied to a single node or to a group of computers in the pureScale cluster. This is great for consolidation. Workload balancing distributes the load among the nodes within a group. Over the past years several performance enhancements have also been added to DB2 that cater to consolidation scenarios.
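As promised above, here is a highly simplified sketch of linking two clusters via HADR. It leaves out the per-member configuration a real pureScale HADR setup needs; the database name, host names, and port are made up:

# on the primary cluster: tell the database where its standby lives
db2 update db cfg for sales using hadr_local_host site1-host hadr_local_svc 4000
db2 update db cfg for sales using hadr_remote_host site2-host hadr_remote_svc 4000
db2 update db cfg for sales using hadr_target_list site2-host:4000 hadr_syncmode async
# on the standby cluster: restore a backup of the database, apply the mirrored
# HADR settings, then start HADR there first
db2 start hadr on db sales as standby
# back on the primary cluster
db2 start hadr on db sales as primary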
BTW: I have been asked whether a DB2 pureScale cluster can brew a good coffee. What would be your answer...?
Thursday, April 2, 2015
db2audit & syslog: Who Stole my Chocolate Easter Eggs?
When the DB2 10.5 Cancun Release (Fixpack 4) was announced I mentioned that db2audit records can be transferred to syslog now and I wanted to test it. The command db2audit is used to configure parts of the DB2 audit infrastructure, to archive audit logs, and to extract information from the archived logs. The "extract" option now features a destination "syslog" (from the command syntax):
Audit Extraction:
    file output-file
  | delasc [ delimiter load-delimiter ] [ to delasc-path ]
  | syslog facility.priority [ tag word ] [ splitrecordsize byte ]
While the option "file" would store the formatted audit logs in a regular text file, choosing "delasc" would split the log data across several delimited text files, ready for postprocessing in the database. The new option "syslog" can be used to hand over the audit data to the system logger facility. Depending which logger is used and how it is set up it could mean storing the audit records in local message files or sending them over to a central hub for analysis (e.g., by IBM Operations Analytics or Splunk).
DB2 Setup
In order to find the one trying to steal the Easter eggs, the audit system would need to be active prior to any attempt. The DB2 audit infrastructure is started with "db2audit start"; basic settings can be changed with "db2audit configure". For my tests I left everything set to failure-only logging and changed the archive path to "/tmp". Using the "describe" option, this is what the configuration looked like:
[hloeser@mymachine ~]$ db2audit describe
DB2 AUDIT SETTINGS:
Audit active: "TRUE "
Log audit events: "FAILURE"
Log checking events: "FAILURE"
Log object maintenance events: "FAILURE"
Log security maintenance events: "FAILURE"
Log system administrator events: "FAILURE"
Log validate events: "FAILURE"
Log context events: "FAILURE"
Return SQLCA on audit error: "FALSE "
Audit Data Path: ""
Audit Archive Path: "/tmp/"
It is also a good idea to use a buffer to hold audit records. The database manager configuration parameter audit_buf_sz controls its size:
db2 update dbm cfg using audit_buf_sz 40
The next step in my setup was to create an audit policy in my test database:
create audit policy execfail categories execute status failure, checking status failure, context status failure error type normal
Creating a policy does not mean it is used. The AUDIT statement takes care of it:
audit sysadm,dbadm,dataaccess,user hloeser using policy execfail
Syslog Setup
The above concludes the DB2 portion of the test setup. Next is the optional step of telling the system logger where to place the received DB2 audit data. The DB2 Knowledge Center has some basic information about how to configure the system error and event log (syslog). Without any changes it is possible to dump the audit data to, e.g., "/var/log/messages". I wanted the records to go to a separate file. Because my system has rsyslog installed, I needed to edit (as root) the file "/etc/rsyslog.conf". Adding the following line causes all "user"-related records to be written to "user_messages.log" in the directory "/var/log/db2":
user.* /var/log/db2/user_messages.log
It is important to create that directory and file (I used "mkdir" and "touch"), then to restart the syslog facility.
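On my machine the steps looked roughly like the following sketch (assuming rsyslog managed by systemd; adapt the restart command to your distribution):

sudo mkdir -p /var/log/db2
sudo touch /var/log/db2/user_messages.log
sudo systemctl restart rsyslog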
DB2 Audit Logs to Syslog
Once done with the setup I connected to my test database and executed several SQL statements, including a "select * from eastereggs" (a non-existing table). Then I deemed my system ready for moving a first batch of audit records over to syslog. If a buffer for the DB2 audit data is used, it needs to be flushed:
db2audit flush
Thereafter, all the current audit logs need to be archived. This can be done for both the instance and for databases. The following archives the logs for my test database and writes the file to the configured archive path (or the default path if none is specified):
db2audit archive database hltest
After all the configuration and preparation, we are finally at the really interesting part, the new extract option. Using "syslog" as destination and the category "user" with the priority level "info", the audit logs are handed over to the system error and event logger:
db2audit extract syslog user.info from files /tmp/db2audit.*
Did the logs really make their way over from DB2 to the system infrastructure? Here is my successful test:
[hloeser@mymachine ~]$ sudo grep -i easter /var/log/db2/user_messages.log
Apr 2 13:32:10 mymachine db2audit: timestamp=2015-04-02-13.31.09.089507; category=CONTEXT; audit event=PREPARE; event correlator=40; database=HLTEST; userid=hloeser; authid=HLOESER; application id=*LOCAL.hloeser.150402095529; application name=db2bp; package schema=NULLID; package name=SQLC2K26; package section=201; text=select * from eastereggs; local transaction id=0x3266020000000000; global transaction id=0x0000000000000000000000000000000000000000; instance name=hloeser; hostname=mymachine;
Happy Easter and hopefully some chocolate eggs are left for you!
Friday, December 12, 2014
New fixpack for DB2 10.5 brings in-memory analytics to Windows and zLinux
After this introduction I would like to point out two product enhancements that are included in this fixpack:
As you may know, "BLU Acceleration" is the technology codename for highly optimized in-memory analytics that is deeply integrated into the supported platforms. It is not just another column store, but optimizes the data flow from disk to the CPU registers to efficiently use the available processing power and memory resources. DB2 is also exploiting special CPU instruction sets, e.g., on the POWER platform, for faster data processing. With the fixpack 5 this technology is available now on Microsoft Windows and for Linux on zSeries.
Another feature enhancement is the new ability to specify which network interface cards (NICs) DB2 should use, if you have multiple. A new file nicbinding.cfg can be used to set up the bindings. If you had to deal with db2nodes.cfg before, then the syntax will look familiar.
That's all for my quick summary. Enjoy the weekend AND DB2.
Monday, September 22, 2014
Enforce backup encryption with encrlib and encropts
The database configuration parameter "encrlib" can be pointed to the encryption library by providing the file path. Only the security administrator is allowed to change the configuration. Once set, the library is automatically used for every database backup. The configuration variable "encropts" can hold additional parameters needed for the encryption (library). Again, only SECADM can change the value.
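A rough sketch of how this could look, with a made-up library path and option string (the actual values depend on the encryption product in use):

# connected as a user holding SECADM
db2 connect to sample
db2 update db cfg for sample using encrlib /opt/encryption/lib/libmyencr.so
db2 update db cfg for sample using encropts 'keyname=backupkey'

From then on, every backup of that database automatically goes through the configured library.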
If you have a database encryption toolkit such as InfoSphere Guardium Data Encryption in use, then the new options provide a simple, auditable way for the security administrator to make sure database backups are secure, too.
Tuesday, September 2, 2014
New DB2 Cancun Release (Version 10.5 Fixpack 4) offers many enhancements
A good start for approaching the new DB2 Cancun release is the fixpack summary in the Knowledge Center. It lists new features by category, my personal highlights are:
- For the in-memory database support (referred to as "column-organized tables" and known as "BLU Acceleration"), some bigger items include so-called shadow tables to improve analytic queries in an OLTP environment, the lifting of several DDL restrictions, and a major performance improvement from adding CHAR and VARCHAR columns to the synopsis table. An in-memory database can be made highly available with the HADR feature.
- DB2 pureScale clusters can be deployed in virtualized environments (VMware ESXi, KVM), on low-cost solutions without the RDMA requirement, and geographically dispersed clusters (two data centers) can be implemented on AIX, Red Hat, and SuSE with just RoCE as a requirement.
- As part of the SQL compatibility, DB2 now supports string length definitions by characters, not just by bytes as before (see the sketch after this list).
- Installation of DB2 in so-called thin server instances.
- A SECADM can enforce encryption of backups.
- db2audit can be used to transfer audit records to syslog for simpler analysis with, e.g., Splunk.
- db2look has been improved to generate the CREATE DATABASE statement and export the configuration (see my earlier blog article on that db2look improvement in DB2 10.1).
- Official support for POWER8.
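To illustrate the character-based string lengths mentioned above, here is a minimal sketch, assuming a Unicode database; table and column names are made up. The NAME column can hold 20 characters, no matter how many bytes each character occupies in UTF-8:

CREATE TABLE customers_intl (
  id    INTEGER NOT NULL,
  name  VARCHAR(20 CODEUNITS32)
);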
Last but not least: What are your favorite vacation destinations? Suggest new codenames as a comment and don't forget new DB2 features you want to see...
Monday, June 2, 2014
Improved db2look in DB2 to mimic database environments
As it is new, I wanted to test it myself. First I created a database "lt" (as in "Look Test") with non-standard options. The next step was to invoke db2look:
db2look -d lt -createdb -printdbcfg -o lt.out
-- No userid was specified, db2look tries to use Environment variable USER
-- USER is: HLOESER
-- Output is sent to file: lt.out
-- Binding package automatically ...
-- Bind is successful
-- Binding package automatically ...
-- Bind is successful
The generated output file starts with the usual environment and version information, then follows the section to recreate the database:
--------------------------------------------------------
-- Generate CREATE DATABASE command
--------------------------------------------------------
CREATE DATABASE LT
AUTOMATIC STORAGE NO
USING CODESET ISO8859-1 TERRITORY de
COLLATE USING IDENTITY
PAGESIZE 8192
DFT_EXTENT_SZ 32
...
;
As you can see, I didn't use automatic storage, and chose a local, non-Unicode codepage with German territory, an identity collation, and 8 kByte pages. Thereafter follow the parameters for the catalog, temporary, and user tablespaces (not shown). After the database creation is completed, next comes the CONNECT statement:
CONNECT TO LT;
Once the database connection is established, another new section starts. It reapplies the database configuration:
--------------------------------------------------------
-- Generate UPDATE DB CFG commands
--------------------------------------------------------
-- The db2look command generates the UPDATE DB CFG statements
-- to replicate the database configuration parameters based on
-- the current values in the source database.
-- For the configuration parameters which support AUTOMATIC,
-- you need to add AUTOMATIC to the end
-- if you want the DB2 database to automatically adjust them.
--UPDATE DB CFG FOR LT USING ALT_COLLATE ;
UPDATE DB CFG FOR LT USING STMT_CONC OFF ;
UPDATE DB CFG FOR LT USING DISCOVER_DB ENABLE ;
UPDATE DB CFG FOR LT USING DFT_QUERYOPT 5 ;
UPDATE DB CFG FOR LT USING DFT_DEGREE 1 ;
...
Right now the enhancements are only available in the just recently released fixpack of DB2 10.1. As with other improvements, I would expect it to be available for the newer DB2 10.5 release soon.
Tuesday, April 29, 2014
Trimming the fun? LTRIM and RTRIM extended
To trim your calories, you can now pass the set of characters to remove as the second argument:
db2 => values rtrim('All I eat: marzipan, vegetables, fruits',' ,abefgilrstuv')
1
---------------------------------------
All I eat: marzipan
1 record(s) selected.
The enhanced LTRIM and RTRIM functions can be used together with other functions of course:
db2 => values replace(ltrim('jogging and eating are great',' adgijno'),'are','is')
1
----------------------------
eating is great
1 record(s) selected.
The examples are just some food for thought about what is possible.
Tuesday, February 18, 2014
DB2 10.5 Fixpack 3 is available
May all your transactions commit...
Friday, October 11, 2013
DB2 V10.5 Fixpack 2 available
Enjoy the weekend!
Friday, August 23, 2013
DB2 10.5 Fix Pack has been released
Monday, December 10, 2012
DB2 fixpacks, support, APARs, and other information
A good starting point is the IBM Support Portal. It requires a so-called "IBM ID" to manage a profile. There you can define RSS news feeds or email subscriptions for many of the IBM products, including the Information Management offerings. Information you can subscribe to includes new or updated Technotes (example: updated Technote on recommended fix packs for Data Server Client Packages), fixes (example: IC84157, Crash recovery may fail if the mirror log...), product and packaging information (example: Mobile Database now included...), etc.
Once the new fixpack is available I usually first read the Fix pack Summary in the DB2 Information Center. It describes the high-level changes in the fixpack.
On the support pages you will also find an overview of the available fix packs for the different supported versions of DB2. When you click on one of the fixpacks, there are additional links leading to, e.g., the list of security vulnerabilities, HIPER and special attention APARs fixed in DB2 (here V10.1, FP2) or the list of the fixes (fix list) for that release. By the way: HIPER stands for High Impact or PERvasive, i.e., bugs with critical impact. APAR is Authorized Program Analysis Report and basically is a formal description of a product defect. Customers usually open a PMR (Problem Management Report) which may lead to an APAR (or not).
Friday, December 7, 2012
DB2 10.1 Fixpack 2 is available
Wednesday, November 28, 2012
Quiz: Where is a good place to find information about DB2 fixpacks?
Post your sources as comments or let me know in other ways.
Thursday, September 20, 2012
DB2 10.1 - the first fixpack is out
- The so-called Fix Pack Summary in the DB2 Information Center,
- and here in the overview DB2 Fix Packs by version you can find all the available fixpacks from DB2 10.1 to DB2 8.2
Some of the new features or enhancements for DB2 10.1 were already included in DB2 9.7 FP6 and needed to be ported. An example of this is the support for XML type for global variables and in compiled SQL functions.
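As a small sketch of what that looks like (names are made up; when running this through the CLP, use an alternate statement terminator such as @, e.g. db2 -td@ -f script.sql):

CREATE VARIABLE app_config XML@

SET app_config = XMLPARSE(DOCUMENT '<config><limit>10</limit></config>')@

-- a compiled SQL function (BEGIN ... END body) that receives and evaluates an XML value
CREATE FUNCTION get_limit(doc XML)
  RETURNS INTEGER
BEGIN
  RETURN XMLCAST(XMLQUERY('$d/config/limit/text()' PASSING doc AS "d") AS INTEGER);
END@

VALUES get_limit(app_config)@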