Friday, December 18, 2015

DB2: The last query...

Bodensee/Lake Constance in Winter
I am always amazed at what DB2 is capable of. Today, I would like to show you my last DB2 query for this year. All that is needed comes with regular DB2 and is even included in the free DB2 Express-C. You could run it in your own DB2, locally, or in the IBM DB2 on Cloud service on Bluemix. Enough introduction, enough of my words for this year; here is my last DB2 query for this year...


[hloeser@magicmachine]$ db2sampl

  Creating database "SAMPLE"...
  Connecting to database "SAMPLE"...
  Creating tables and data in schema "HLOESER"...
  Creating tables with XML columns and XML data in schema "HLOESER"...

  'db2sampl' processing complete.

[hloeser@magicmachine]$ db2 connect to sample

   Database Connection Information

 Database server        = DB2/LINUXX8664 10.5.6
 SQL authorization ID   = HLOESER
 Local database alias   = SAMPLE


[hloeser@magicmachine]$ db2 "select substr(note_text,length(note_text)-36,18) from in_tray where source='CHAAS'"

1                
------------------
Enjoy the holidays

  1 record(s) selected.



Monday, December 7, 2015

MySQL-Style LIMIT and OFFSET in DB2 Queries

I was recently asked whether DB2 supports MySQL-style syntax to page through query result sets. The good news is that DB2 supports LIMIT and OFFSET in addition to its own syntax. The only drawback is that the extra syntax support needs to be enabled before it can be used. In the following I am going to show you how.
MySQL syntax in DB2

To support features offered by other database systems like Oracle, Microsoft SQL Server, Sybase, or MySQL that are incompatible with existing DB2 behavior, the DB2_COMPATIBILITY_VECTOR registry variable was introduced in DB2 version 9.7. The vector is used to selectively enable features, and there are predefined settings named ORA (enable all Oracle features), SYB (Sybase), and MYS (MySQL). With that background knowledge we can directly get started on enabling LIMIT and OFFSET.
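
A minimal sketch of the steps (the table and column names come from the SAMPLE database; note that the registry variable is only picked up when the instance is restarted, hence the stop/start):

db2set DB2_COMPATIBILITY_VECTOR=MYS
db2stop
db2start
db2 connect to sample
db2 "select empno, lastname from employee order by empno limit 5 offset 10"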

Monday, November 23, 2015

IDUG 2015 in Dublin: A week with friends

Last week I attended the 2015 IDUG DB2 Tech Conference in Dublin, Ireland. It was a week together with 500+ friends, old and new. A week of sharing information about DB2, learning cool new stuff (even as a longtime user, developer, consultant, ...), making connections and friends, and bringing home a long list of things to try out and dig deeper into.
IDUG 2015 in Dublin, Ireland

The conference itself started on Monday with two tracks each for DB2 for z/OS, DB2 for Linux, UNIX, and Windows, and application development. Sometimes it was hard to pick a session because several interesting presentations ran in parallel. Even as a "friend of DB2" for many years, there was a lot to learn, both from users presenting their experiences with DB2 and from IBMers sharing the latest features, interesting details, and experience from customer projects. I was astonished that even after long days with receptions and dinners in the evening, and not much time for sleep, the early morning sessions were packed with a highly interested crowd. There were lively discussions all the time.

Although the conference focused on using DB2 "on premise", there were a couple of sessions around using DB2 "on cloud", either as DB2 itself or in its dashDB variation on Bluemix. The cloud offering is interesting for innovation and prototyping projects, for consolidating smaller instances, or as a form of redundant storage. I am sure we will see more such sessions and customer stories in the future.

A big THANK YOU to all the IDUG volunteers who made this event possible. I am looking forward to IDUG DB2 Tech Conference 2016 which will be in Brussels, Belgium. The call for presentations is already open and I highly recommend submitting a session proposal.

Friday, October 30, 2015

DB2, Halloween, and a Time Saver

A roadblock?
Well, right now I am busy with all kinds of things, juggling different database and cloud topics. Last weekend, Europe switched back from Daylight Saving Time (summer time) to standard time; North America will follow this weekend. During this process you supposedly get one additional hour of sleep or time for other stuff. In my case it turned out that I spent the hour changing wall clocks, alarm clocks, and watches. Anyway, here is a real time saver for DB2...

DB2 has the function TIMESTAMP_FORMAT (with the synonyms TO_DATE and TO_TIMESTAMP) to convert string values to timestamps or dates. The expected date and time format can be specified via the usual placeholders, e.g., MM for months or SS for seconds. However, there are some tricky parts to it, and I am going to treat you with a solution (this was the "trick & treat" Halloween reference...):

db2 "values to_date('31.10.2015 00:00:00','DD.MM.YYYY HH:MI:SS')"
SQL20448N  "31.10.2015 00:00:00" cannot be interpreted using format string
"DD.MM.YYYY HH:MI:SS" for the TIMESTAMP_FORMAT function.  SQLSTATE=22007


db2 "values timestamp_format('31.10.2015 24:00:00','DD.MM.YYYY HH:MI:SS')"
SQL20448N  "31.10.2015 24:00:00" cannot be interpreted using format string
"DD.MM.YYYY HH:MI:SS" for the TIMESTAMP_FORMAT function.  SQLSTATE=22007


In both cases I am trying to convert midnight of October 31st to an internal timestamp value. Both calls return an error. The reason is that "HH" is the same as "HH12", the 12-hour format, which only knows the hour values 1 to 12. Using "HH24" (the 24-hour format) everything is OK:

db2 "values timestamp_format('31.10.2015 00:00:00','DD.MM.YYYY HH24:MI:SS')"

1                        
--------------------------
2015-10-31-00.00.00.000000

  1 record(s) selected.


db2 "values timestamp_format('31.10.2015 24:00:00','DD.MM.YYYY HH24:MI:SS')"

1                        
--------------------------
2015-10-31-24.00.00.000000

  1 record(s) selected.


If you want to use the HH/HH12 format, you need to specify where to expect the so-called meridian indicator (AM/PM):

db2 "values timestamp_format('31.10.2015 12:00:00am','DD.MM.YYYY HH:MI:SSAM')"

1                        
--------------------------
2015-10-31-00.00.00.000000

  1 record(s) selected.


db2 "values timestamp_format('31.10.2015 12:00:00am','DD.MM.YYYY HH:MI:SSPM')"

1                        
--------------------------
2015-10-31-00.00.00.000000

  1 record(s) selected.


That's all, enjoy the weekend and Halloween...

Monday, October 12, 2015

Utilizing User-Provided Services to Link Bluemix Apps to External Databases

Recently, I wrote a Bluemix app which only utilized one of the provided runtimes (Python, Node.js, Java, ...). The database was external to the Bluemix environment and was not one of the many offerings in the "Data and Analytics" catalog. As I wanted to keep my app as generic as possible, I searched for a way to link the app and the database. One solution is to use a so-called user-provided service. Here is what I did.

Once I had provisioned my runtime on Bluemix (Python for this example), the app showed up on the dashboard. As can be seen, it is not linked to any service. Because the database is not provided out of the Bluemix catalog, there isn't any entry or reference to the database yet. What can be done now is to create a Cloud Foundry user-provided service using the command-line interface (CLI):
 
App with Runtime only on Bluemix
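
A sketch of such a call follows. The service name "mydb-credentials" and the JSON keys are made up for illustration; the actual values are the connection details of your external database:

cf create-user-provided-service mydb-credentials -p '{"hostname":"mydb.example.com","port":"50000","db":"MYDB","username":"henrik","password":"secretpw"}'
cf bind-service myapp mydb-credentials
cf restage myapp

After binding and restaging, the credentials show up in the VCAP_SERVICES environment variable of the app, just like for a service provisioned from the Bluemix catalog.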

Monday, September 28, 2015

Altering tables in the SQLDB service on Bluemix

My blog has been quiet for some weeks thanks to some offline vacation and some business trips. Last week I first spoke about DB2 at a conference, then coached developers at a Bluemix hackathon. At that Bluemix event I was asked whether it would be possible to change some column definitions for an existing table. It was easy to say "absolutely", but it turned out to be harder than thought because at first I couldn't figure out how to do it. So let's document it... :)

Once you have provisioned a SQLDB database service on Bluemix and launch the administration console, you are greeted by the "Getting Started" screen prominently featuring the following three choices:
IBM SQL Database: Getting Started

The "Run Queries" button is intended to perform SQL SELECTs, nothing else.
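
The ALTER TABLE statement I was after, something like the following sketch, is therefore not going to run there (table and column names are made up):

alter table mytable alter column notes set data type varchar(400)

Depending on the kind of change, DB2 may place the table in reorg-pending state, so a subsequent REORG TABLE can be required.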

Monday, August 24, 2015

Keeping track of "my" DB2

Some years back, when talking about DB2, the only distinction was "I am on the host" or "We are mostly OLTP". My standard phrase has been "I am a distributed guy". Well, today, even when talking only about DB2 for Linux, UNIX, and Windows ("DB2 LUW"), some more description is needed to clearly state what, where, and how the product is used. Here is a small overview of my world of DB2 LUW.


If you have not done so, I would recommend taking a look at the IBM Data Server Manager, a fairly new tool to monitor, administer, and tune DB2. The experience of using its web-based GUI can be applied directly to the administration interfaces of the related DB2 services in the cloud, sqldb and dashDB.

Wednesday, August 12, 2015

Wired or weird Tweets? Doesn't matter to Bluemix, DB2, and Node-RED...

Robot with a personality?
Someone to talk to, someone to take the meeting minutes. Well, I can offer that - at least partially. I took the Node-RED service on Bluemix for a test drive and created a Twitter service. It can automatically respond to tweets and can also store the incoming messages in a DB2 table. In my example it is just a more or less silly service right now, but it could be expanded to react to and log Twitter-based customer feedback and to provide some basic interim reaction or acknowledgement. Let me show you how easy it is to get started...

To build the Twitter service you need the "Node-RED Starter" boilerplate from the Bluemix catalog as well as the "SQL Database" service. Create the Node-RED boilerplate first, then add the database service and directly bind it to the new application. Once everything has been provisioned, it is time for wiring up the Twitter service in the flow editor. The following picture gives an overview of the two flows when completed. I made the flows available in the Node-RED library.

Monday, July 20, 2015

Bluemix: Simple cron-like service for my Python code

Schedule background tasks in Bluemix
This morning I had some time to fiddle around with Bluemix and Python. I wanted to test creating a cron-like service, i.e., something that runs in the background and kicks off tasks at given intervals or at a given time. For Python there is a package named "schedule" that is easy to set up and use. So how do you create a background service in Bluemix?

The trick is in the file manifest.yml, and the Cloud Foundry documentation has all the needed details (Bluemix is built on this open standard). The attribute "no-route" is set to true, indicating that this is not a web application and that no subdomain name is needed. In addition, the attribute "command" is set to invoke the Python interpreter with my script as parameter. Basically, this starts my background task:

applications:
- name: hltimer
  memory: 256M
  instances: 1
  no-route: true
  command: python mytimer.py
  path: .


The cron-like service script is pretty simple. It uses the "schedule" package to set up recurring jobs. I tested it with the Twilio API to send myself an SMS at given timestamps. You can also use it to scrape webpages at given intervals, kick off feed aggregators, read out sensors and update databases like Cloudant or DB2, and more. See this Github repository for the full source.

import schedule
import time

def job():
    # put the task to execute here
    pass

def anotherJob():
    # another task can be defined here
    pass

# run job every 10 minutes and anotherJob every day at 10:30
schedule.every(10).minutes.do(job)
schedule.every().day.at("10:30").do(anotherJob)

while True:
    schedule.run_pending()
    time.sleep(1)
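
With manifest.yml and the script in place, deployment is the usual push from the project directory (assuming the Cloud Foundry CLI is installed and you are logged in to Bluemix):

cf push
cf logs hltimer --recent

The second command is a quick sanity check that the background task is actually running.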

Friday, July 17, 2015

DB2 Encryption: Some rotation before the weekend

Some bits on encryption...
A while ago I wrote about DB2 encryption and how to rotate the master key. Well, today I rotated the master key again and looked at the metadata before and after the key rotation. I started with an encrypted database that I had created some months ago.

The first step was to obtain the encryption information, i.e., metadata about the encrypted database. This can be done in two ways: Either using the utility db2pd or by calling the table function ADMIN_GET_ENCRYPTION_INFO.
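
As a sketch, using my test database ENC1, the two options and the rotation itself look like this:

db2pd -db enc1 -encryption

db2 connect to enc1
db2 "select * from table(sysproc.admin_get_encryption_info())"
db2 "call sysproc.admin_rotate_master_key(?)"

The procedure ADMIN_ROTATE_MASTER_KEY returns the label of the new master key; passing a parameter marker lets the command line processor display it.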

Wednesday, July 1, 2015

DB2 Battle: Optimization Profile vs. Statement Concentrator (Part 3)

db2expln output - statement concentrator active
In February I had fiddled with optimization profiles and REOPT working against the statement concentrator (part 2 is here). Today I would like to give a (hopefully) interesting update. The tool db2expln can be used to describe the access plan for static and dynamic SQL statements. It works in a different way than db2exfmt and visual explain (part of tools like Data Studio and IBM Data Server Manager). Thus, I was eager to see whether it could help determine if my optimization profile was applied.


Monday, June 29, 2015

DIY: 3D-printing a quadrocopter (or hardware for IoT)

3D-printed quadrocopter/drone
Over the weekend I had the opportunity to get deeper into 3D printing. I was invited to Maker World 2015 in Friedrichshafen and to a workshop run by the guys of Construction Zone. The goal was to get an overview of 3D printing by designing parts for a small quadrocopter, printing them, and eventually assembling the printed parts and electronic components into a flight-ready drone. Creating the casing for Internet-of-Things (IoT) devices is basically the same process.

The mini drone and its parts can be seen in the picture above. It has a printed body, a printed cover, a small controller board (in the middle of the drone), a battery pack, and 4 motors. The first step in getting the copter up into the air was to design the body and cover, as seen in the picture below. Some attention needs to be paid to keeping the rotors at a safe distance from each other and to leaving enough space within the body to house the battery pack and the controller board.

Design process for a DIY quadrocopter
Once the design is complete, the next step is to create an STL file and/or G-code for the 3D printer. During that process the printing quality is specified. It is based, among other factors, on the number and thickness of the printing layers, and that directly impacts the time needed for printing. What I learned is that patience is a virtue...

Friday, June 12, 2015

DB2 pureScale for the Masses

DB2 pureScale Cluster (nanoCluster)
It has been almost four years since I wrote about my exciting tests with a DB2 pureScale demo cluster (see picture). At that time the pureScale feature was still fairly new and supported only on a limited choice of hardware and software. Since then, the requirements to put DB2 pureScale into production have been reduced dramatically, and at the same time many useful features have been added. Let me give a quick overview of why customers choose DB2 with pureScale, both for scaling out systems and as a consolidation platform. Today, DB2 pureScale really is something for the masses...

The following is an unsorted and incomplete list of features and good-to-know items that come to mind when talking about DB2 pureScale:

All of the above boils down to cost savings and higher flexibility, an important driver behind opting for pureScale. Having said that, I need to mention that earlier this week I decided against pureScale: I gave away a nanoCluster, the same type as pictured above, to make some room in my home office.

BTW: I have been asked whether a DB2 pureScale cluster can brew a good coffee. What would be your answer...?

Friday, May 29, 2015

Bluemix: Combining the Cloud App with my On-Premise DB2 Using the Secure Gateway

Bluemix Secure Gateway
Another Friday, another opportunity to have some fun with Bluemix and DB2. Today I fiddled with the Secure Gateway service, which enables everybody to combine cloud-based applications with internal, on-premise services. It is kind of the building block for creating a hybrid solution or, as it is sometimes called, for linking systems of engagement (cloud apps) with systems of record (enterprise database systems like DB2, hosted in your own data center). After this introduction, let the fun begin...

Monday, May 11, 2015

My DB2 pureScale / Parallel Sysplex Moment

Last week, members of the German union for train operators/engineers were on strike and only a fraction of trains were operating. I had to go to Zurich, Switzerland, but most of the trains on my usual route were out of service. That's when I had my DB2 pureScale or DB2 Parallel Sysplex moment.

What is special about a DB2 Parallel Sysplex or DB2 pureScale configuration? It is a data sharing cluster. If one node in the cluster or a network connection goes down and is inaccessible, the other components can pick up the work. All this is usually transparent to the application. In my case I was the application and data at the same time: Try to get me to Zurich, discuss some issues in person, get back home.

Wednesday, May 6, 2015

Using dashDB or DB2 with Apache OpenOffice (or LibreOffice)

Yesterday evening I had some time on my own and didn't want to read. Why not surf the Internet? That's why there are services like YouTube or Bluemix where you can spend hours without too much thinking... :) I ended up fiddling with dashDB on Bluemix and building reports with my local OpenOffice. Here are the details of my evening activity.

Run query against dashDB in Excel
When I played with dashDB, a fully managed data warehouse service, I came across the "Run Query in Excel" button. That button shows up in the web-based query interface where you can compose SQL queries and execute them against the dashDB database. I got curious because my Linux machine only has Apache OpenOffice installed. ODC (Office Data Connection) files are not supported (yet) by OpenOffice and LibreOffice, but the programs offer the component "Base" as a database front-end similar to Microsoft Access. So why not try to hook up the cloud-based dashDB with my local OpenOffice? This can be done using a JDBC connection.
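
In Base this means choosing "Connect to an existing database" with type JDBC and entering settings along the following lines (the host name is a placeholder, the real values are shown on the dashDB connection page, and the IBM JDBC driver jar, e.g. db2jcc.jar, has to be registered in the OpenOffice class path first):

Datasource URL:    jdbc:db2://<dashdb-host>:50000/BLUDB
JDBC driver class: com.ibm.db2.jcc.DB2Driver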

Monday, May 4, 2015

Dealing with DB2 Security and my Stupidity...

Today I wanted to test some options around encrypting DB2 backups and restoring encrypted database backups. I can report that the security features work; only my stupidity (or call it "vacation readiness") caused some delays.

In my previous blog entries I already showed you how to create an encrypted DB2 database using the new "native encryption" and how to rotate the master keys using built-in procedures. Next on my list to test was taking an encrypted database backup. It is pretty straightforward, just call BACKUP DATABASE and add the ENCRYPT option:

[henrik@mymachine]$ db2 backup database enc1 encrypt

Backup successful. The timestamp for this backup image is : 20150504135739
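
The counterpart works the same way: the RESTORE DATABASE command also accepts encryption options. As a sketch, restoring the image as an encrypted database again:

db2 restore database enc1 encrypt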



Tuesday, April 21, 2015

My 10 Minute DB2 App on Bluemix

I needed some fun at the end of the workday today. Why not enhance my own skills, do some "coding", and try out Node-RED on Bluemix again (I had performed a "phoney" test in October already)? This time I wanted to see whether a database like DB2 or dashDB is supported as a backend.

Node-RED Starter: This application demonstrates how to run the Node-RED open-source project within IBM Bluemix.
Node-RED Starter
Node-RED storage nodes
To get started I logged into my free trial account on http://bluemix.net. I selected the "Node-RED Starter" from the catalog and created the app skeleton. After the app was staged (the resources allocated, files copied, and the skeleton app started), I could begin wiring up my application logic. My first look at the wiring board went to the list of supported storage nodes. What was new since my last test is support for dashDB and "sqldb" (DB2) as input and output nodes. That is, it is possible to store data in those two database systems (output) and to bring data into the app by querying a database (input).

To use DB2 in Node-RED, you first need to create a sqldb service in Bluemix. This meant going back to the catalog, skipping to the Data Management section, selecting the sqldb service, and adding it to my application (see picture below). Thereafter, my application needed to be restaged (restarted) so that it could pick up the properties of the sqldb service.

Wednesday, April 15, 2015

German DB2 User Group (DeDUG), May 8, with Leo Martin (ex-BND)

(originally in German for the German user group)

The next meeting of the German DB2 User Group (DeDUG) takes place on May 8 at IBM in Ehningen. Besides interesting DB2 topics, the agenda also includes a look at dashDB, a data warehousing service in the cloud. The highlight of the event, however, will probably be the talk by a special guest: Leo Martin, a criminal scientist by training and a former intelligence officer, will speak about "secret weapons of communication" ("Geheimwaffen der Kommunikation"), something that will certainly be useful both professionally and privately.

More information and the option to register can be found here or here.

Thursday, April 2, 2015

db2audit & syslog: Who Stole my Chocolate Easter Eggs?

Security Audit
Easter is coming up, a slow day due to vacation and Spring Break, time to try out, in advance, some of the chocolate Easter eggs. Gosh! They are gone! Who took them? It must have been someone with inside knowledge. Let's take a look at the audit logs...

When the DB2 10.5 Cancun Release (Fixpack 4) was announced, I mentioned that db2audit records can now be transferred to syslog, and I wanted to test it. The command db2audit is used to configure parts of the DB2 audit infrastructure, to archive audit logs, and to extract information from the archived logs. The "extract" option now features a destination "syslog" (from the command syntax):

Audit Extraction

   .-file--output-file---------------------------------------------------.   
|--+---------------------------------------------------------------------+-->
   +-delasc--+---------------------------+--+-----------------+----------+   
   |         '-delimiter--load-delimiter-'  '-to--delasc-path-'          |   
   '-syslog--facility.priority--+-----------+--+-----------------------+-'   
                                '-tag--word-'  '-splitrecordsize--byte-'     

While the option "file" stores the formatted audit logs in a regular text file, choosing "delasc" splits the log data across several delimited text files, ready for postprocessing in the database. The new option "syslog" can be used to hand over the audit data to the system logger facility. Depending on which logger is used and how it is set up, this could mean storing the audit records in local message files or sending them over to a central hub for analysis (e.g., by IBM Operations Analytics or Splunk).


DB2 Setup
In order to find the one trying to steal the Easter eggs, the audit system needs to be active prior to any attempt. The DB2 audit infrastructure is started with "db2audit start"; basic settings can be changed with "db2audit configure". For my tests I left everything set to failure-only logging and changed the archive path to "/tmp". Using the "describe" option, here is how the configuration looked:


[hloeser@mymachine ~]$ db2audit describe
DB2 AUDIT SETTINGS:

Audit active: "TRUE "
Log audit events: "FAILURE"
Log checking events: "FAILURE"
Log object maintenance events: "FAILURE"
Log security maintenance events: "FAILURE"
Log system administrator events: "FAILURE"
Log validate events: "FAILURE"
Log context events: "FAILURE"
Return SQLCA on audit error: "FALSE "
Audit Data Path: ""
Audit Archive Path: "/tmp/"


It is also a good idea to use a buffer to hold audit records. The database manager configuration parameter audit_buf_sz controls its size:
db2 update dbm cfg using audit_buf_sz 40

The next step in my setup was to create an audit policy in my test database:
create audit policy execfail categories execute status failure, checking status failure, context status failure error type normal

Creating a policy does not mean it is used. The AUDIT statement takes care of it:
audit sysadm,dbadm,dataaccess,user hloeser using policy execfail

Syslog Setup
The above concludes the DB2 portion of the test setup. Next is the optional step of telling the system logger where to place the received DB2 audit data. The DB2 Knowledge Center has some basic information about how to configure the system error and event log (syslog). Without any changes it is possible to dump the audit data to, e.g., "/var/log/messages". I wanted the records to go to a separate file. Because my system has rsyslog installed, I needed to edit (as root) the file "/etc/rsyslog.conf". Adding the following line causes all "user"-related records to be written to "user_messages.log" in the directory "/var/log/db2":

user.*                            /var/log/db2/user_messages.log

It is important to create that directory and file (I used "mkdir" and "touch"), then to restart the syslog facility.
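
On my machine these steps boiled down to the following (the restart command varies by distribution; this is the systemd variant):

sudo mkdir -p /var/log/db2
sudo touch /var/log/db2/user_messages.log
sudo systemctl restart rsyslog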


DB2 Audit Logs to Syslog
Once done with the setup, I connected to my test database and executed several SQL statements, including a "select * from eastereggs" (a non-existing table). Then I deemed my system ready for moving a first batch of audit records over to syslog. If a buffer for the DB2 audit data is used, it needs to be flushed first:
db2audit flush

Thereafter, all the current audit logs need to be archived. This can be done both for the instance and for individual databases. The following archives the logs for my test database and writes the file to the configured archive path (or the default path if none is specified):
db2audit archive database hltest

After all the configuration and preparation, we finally arrive at the really interesting part, the new extract option. Using "syslog" as the destination and the category "user" with the priority level "info", the audit logs are handed over to the system error and event logger:
db2audit extract syslog user.info from files /tmp/db2audit.*

Did the logs really make their way over from DB2 to the system infrastructure? Here is my successful test:
[hloeser@mymachine ~]$ sudo grep -i easter /var/log/db2/user_messages.log
Apr  2 13:32:10 mymachine db2audit: timestamp=2015-04-02-13.31.09.089507; category=CONTEXT; audit event=PREPARE; event correlator=40; database=HLTEST; userid=hloeser; authid=HLOESER; application id=*LOCAL.hloeser.150402095529; application name=db2bp; package schema=NULLID; package name=SQLC2K26; package section=201; text=select * from eastereggs; local transaction id=0x3266020000000000; global transaction id=0x0000000000000000000000000000000000000000; instance name=hloeser; hostname=mymachine;

Happy Easter and hopefully some chocolate eggs are left for you!

Friday, February 27, 2015

DB2 Battle: Optimization Profile vs. Statement Concentrator (Part 2)

Today I wanted to try out using DB2 optimization profiles for a statement impacted by the statement concentrator. In part 1 I gave the background, showed how I created an optimization profile, and found that a simple explain statement didn't return what I expected. In this final part I am going to look at DB2 section actuals to hopefully prove that my optimization guideline works as I had hoped.

Because all my "explain plan for select ..." statements resulted in the same access plan, I decided to use section actuals to look at how statements are really executed within DB2. The actuals are kind of a live log of the real statement execution costs and the applied access plan. The first step towards section actuals is to have a workload and an activity event monitor and to switch the monitor on:


db2 "create workload betw applname('python') collect activity data with details,section"
db2 "create event monitor betwmon for activities write to table"
db2 "set event monitor betwmon state 1"

The above statements create a workload which collects section data. The application name (APPLNAME) is "python" because I use a Python script (see below) for parts of the testing.
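
Once activities have been captured later on, the event monitor's tables can be queried directly. Here is a sketch of looking up the captured statements (by default DB2 derives table names like ACTIVITYSTMT_BETWMON from the event monitor name):

db2 "select appl_id, uow_id, activity_id, substr(stmt_text,1,60) from activitystmt_betwmon"

The returned IDs can then be fed into the EXPLAIN_FROM_ACTIVITY procedure to extract the access plan that was actually used.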


Script:
import ibm_db
conn = ibm_db.connect("hltest","hloeser","secretpw")
ibm_db.exec_immediate(conn, 'set current optimization profile="HLOESER"."PROFILE_BETW"')
ibm_db.exec_immediate(conn, 'select id, s from betw where id between 2 and 20')



DB2 Battle: Optimization Profile vs. Statement Concentrator (Part 1)

DB2 explain output
Today I wanted to try using a DB2 optimization profile for a statement impacted by the DB2 statement concentrator. It turned out to be a longer exercise than expected, but also more fun, covering a lot of the DB2 tuning and monitoring infrastructure. So set aside a few minutes and follow my journey into the world of query optimization, workload monitoring, section actuals, and more. I split the story into two parts because of its length (Update on 02-Jul-2015: A third part is here).

The original question I tried to answer was related to REOPT and a query like this:
select id, s
from betw
where id between ? and ?
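
For context, a minimal optimization profile with a REOPT guideline for exactly this statement could look like the following sketch (the profile would then be inserted into the SYSTOOLS.OPT_PROFILE table as "HLOESER"."PROFILE_BETW" and activated with SET CURRENT OPTIMIZATION PROFILE, as in the Python script in part 2 above):

<?xml version="1.0" encoding="UTF-8"?>
<OPTPROFILE>
  <STMTPROFILE ID="betw REOPT guideline">
    <STMTKEY><![CDATA[select id, s from betw where id between ? and ?]]></STMTKEY>
    <OPTGUIDELINES>
      <REOPT VALUE="ONCE"/>
    </OPTGUIDELINES>
  </STMTPROFILE>
</OPTPROFILE>

After changing a profile, a FLUSH OPTIMIZATION PROFILE CACHE statement makes DB2 pick up the new version.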