Wednesday, December 14, 2011

If you are late, just say you are super asynchronous... (HADR)

Well, not everybody is on time, and waiting for them can hold up a meeting, important decisions, or completing tasks. The same waiting can happen in an IT environment for various reasons, e.g., a replication or synchronization process taking longer because of network congestion or temporarily overloaded hardware. But what can you do when other tasks are kept waiting?

When using DB2 HADR (High Availability Disaster Recovery) some thoughts have to be put into the decision of what synchronization mode to use. Starting with DB2 9.7 FP 5 there are now 4 instead of 3 modes offered:
  • SYNC (synchronous): the transaction log has been written on both the primary and standby
  • NEARSYNC (nearly synchronous): the transaction log has been written on the primary and has been transferred to the standby
  • ASYNC (asynchronous): the transaction log has been written on the primary and is ready to be sent over to the standby
  • SUPERASYNC (super asynchronous): the transaction log has been written on the primary
The SYNC mode needs the most work before a transaction is considered committed, the SUPERASYNC mode the least. Conversely, the SUPERASYNC mode has the most potential for loss of a transaction in the case of a failure. It is the traditional trade-off between performance and consistency, as we have it with isolation levels in database systems.
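The mode is controlled by the HADR_SYNCMODE database configuration parameter. A minimal sketch from the CLP, assuming a database named SAMPLE (the new mode takes effect when HADR is restarted):

```
db2 update db cfg for SAMPLE using HADR_SYNCMODE SUPERASYNC
db2 get db cfg for SAMPLE | grep HADR_SYNCMODE
```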

If you want to learn more about HADR for DB2, take a look at the best practices section on developerWorks. In August a new paper on DB2 HADR has been added that focuses on the configuration and tuning of HADR environments. And remember, you are not reading the paper late. You are just super asynchronous with the news...

Friday, December 9, 2011

5 minutes, 60 seconds, and ATS: the Friday learning

It is Friday afternoon and a slow day, time to look up an interesting topic in the DB2 Information Center: ATS.
  • What is ATS? It is the Administrative Task Scheduler.
  • It enables DB2 to automate the execution of tasks.
  • 5 minutes? Every 5 minutes ATS checks for new or updated tasks and executes them if needed.
  • 60 seconds? If the ATS daemon fails to execute a task, it will retry the execution every 60 seconds.
  • Don't overload! If a specific task is still running, ATS won't start another instance of the same task.
  • ADMIN_TASK_STATUS is an administrative view to retrieve status information.
  • SYSTOOLSPACE is the place where task data is stored. It is a user data tablespace.
  • And, last but not least, what is a task...? Tasks need to be encapsulated in user-defined or system-defined procedures. ADMIN_CMD is such a system-defined procedure which can be used to backup a database, run reorg, collect statistics via runstats, etc.
So much for the introduction; you can read more details at the linked pages.
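As a sketch of how a task might be registered, assuming ATS has been enabled (via the DB2_ATS_ENABLE registry variable) and SYSTOOLSPACE exists; the task name, schedule, and table are made up for illustration:

```sql
-- Hypothetical example: run RUNSTATS via ADMIN_CMD every night at 01:00.
CALL SYSPROC.ADMIN_TASK_ADD(
  'NIGHTLY RUNSTATS',           -- task name (illustrative)
  NULL, NULL, NULL,             -- begin, end, max invocations: unlimited
  '0 1 * * *',                  -- UNIX-cron-style schedule: daily at 01:00
  'SYSPROC', 'ADMIN_CMD',       -- the system-defined procedure to run
  'VALUES(''RUNSTATS ON TABLE myschema.mytable WITH DISTRIBUTION'')',
  NULL, NULL);
```

Afterwards, the ADMIN_TASK_STATUS view mentioned above can be queried to see whether and when the task ran.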

Thursday, December 8, 2011

Scaling of memory-related configuration settings for tests of DB2 LUW

In my recent post I wrote about some available workload drivers. Today, I would like to point you to an interesting article on developerWorks: Scaling of DB2 for Linux, UNIX, and Windows memory-related configuration parameters on a test system. In that article the authors describe a way of simulating a production system on a test system by scaling down (or up) some of the memory settings. They describe a constrained and a relaxed approach for either a system with fixed settings or one with flexible settings like when STMM is in use. A Perl script is provided so you can use the approach for your own testing.

Monday, December 5, 2011

WMD: Weapon of mass destruction? No, Workload Multiuser Driver!

When you hear of WMD, for many of you a term other than Workload Multiuser Driver may come up first. But that is the term, and hence the acronym, chosen by the team over at the Sourceforge project working on this add-on to the DB2 Technology Explorer. The WMD is a RESTful web service that allows multiple users to concurrently run different workloads against DB2, and it can be controlled from the Technology Explorer.

This is of course interesting when you want to showcase certain features of DB2. However, as the WMD is a component of its own and can be downloaded as such, it is also one of the options to set up your own performance or system tests. And this brings me to the question I was recently asked: What free workload drivers do you know of for DB2?

In addition to the WMD there is also a workload driver in the TPoX (Transaction Processing over XML data) benchmark, another Sourceforge project for DB2. It cannot be downloaded separately, but it is documented and can be adapted to your own needs.

What other free workload drivers do you know of, which ones do you prefer?

Tuesday, November 22, 2011

All the news: DB2 Express-C with PL/SQL, new TPoX version, DS 3.1, and hello to all PMs

I am in between business trips and there is not much time for looking deeper into any specific problems. But I wanted to touch base on a few things that are new:

  • DB2 Express-C, the free to download, free to use edition of DB2 LUW, now includes the Oracle compatibility. That is, you can develop and use PL/SQL packages with DB2, for free, even in production. Although it still says "9.7.4" at the DB2 Express-C download site, clicking through offers DB2 9.7.5 for download. I just tried it.
  • A new version of TPoX, the open source XML database benchmark, has been released. Most changes are to the workload driver. I know that some of you use the workload driver not just for TPoX and DB2.
    BTW: The DB2 Technology Explorer/Management Console includes a so-called Workload Multiuser Driver (WMD) that can be handy, too.
  • IBM Data Studio 3.1 has been out for a few weeks, and Data Studio will replace the DB2 Control Center in the near future. There is a so-called Administration Client (which is small) and a Full Client; they differ in download size and function set. An overview of what is included, and which database servers in addition to DB2 are supported, is listed in this Data Studio V3.1 features document.
Last but not least I would like to say Hello to all Project Managers (link to Dilbert comic). Dining out and working in large projects never will be the same again...

Wednesday, November 9, 2011

Monitoring: Three new for three old monitoring functions

I was reading over the list of new, deprecated, and discontinued functionality after DB2 9.7 Fixpack 5 was made available. What I found interesting is that 3 old snapshot monitoring table functions and their related views are now listed as deprecated, which means they may be removed in a future release. Many of you know that DB2 9.7 introduced new, more lightweight monitoring functions.

I am not working in DB2 development anymore, but I know that maintaining two sets of functions, especially in critical areas like memory management, can be a pain. Marking the old functions as deprecated in a fixpack suggests they will be gone sooner rather than later. What does it mean for you? Get ready to explore and exploit the new monitoring functions if you are not already using them. They also integrate well with workload management.
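As a quick sketch of the new-style interfaces, here is a query against one of the lightweight MON_GET table functions (the NULL asks for all buffer pools, the -2 for all members):

```sql
-- Buffer pool logical vs. physical data reads via the new monitoring interface
SELECT bp_name, pool_data_l_reads, pool_data_p_reads
FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2)) AS t;
```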

Ready for an IBM puzzle? Some links and a question...

Hey, are you ready for an interesting puzzle? In the following I will post some links to support articles and the Information Center and you tell me what I was up to...

This may not sound like fun, but all of us will learn and also improve patience. :)
Okay, let's get started:
So, what did I look for?

Friday, October 28, 2011

DB2 9.7 Fixpack 5 is available

The latest fixpack for DB2 9.7, FP5, has just been released. I took a look at the new features described in the Information Center. For the Express-C crowd the most important news is that PL/SQL can now be compiled, something that before was only available in the non-free editions of DB2. For the PL/SQL lovers also some new functions have been added to simplify migrations from Oracle to DB2.

Those working with HADR and looking for increased performance (and there is always a downside to it) can take a look at the super asynchronous sync mode. Another feature that I find interesting is a new procedure DESIGN_ADVISOR that can be used to get design recommendations from the server, so that you can write your own wrapper for the design advisor.

That's it for now, happy Friday and have a great weekend.

Thursday, October 27, 2011

Epilepsy, computer viruses, and personality changes

Did you ever encounter a computer virus on your machine and find yourself thinking that it is no longer "your computer"? What I mean is that the PC behaves in different, previously unknown patterns. It may slow down, react differently during your standard use, and seem weird in its behavior. Your computer encountered "personality changes".

As I wrote a couple of times, my son/the family has been dealing with epilepsy caused by a brain tumor. Fighting the cause of the seizures and damming them is one thing; coping with the changes to that person is another. The personality changes caused by epilepsy can have a big impact on family and social life. When your kid gradually slows down speaking, when he becomes more aggressive over time, when your child has more and more trouble remembering things, you know that you will have new challenges added to the core issue, the seizures.

For a computer virus there is usually a cure by cleaning the computer through different ways. For epilepsy damming the seizures through drugs or trying to remove the cause through surgery are first steps. Thereafter, dealing with the personality changes is a long road ahead. In the case of my son, we are hoping for the best. Next on the list will be to break free, not from Oracle :), but from medication...

Wednesday, October 26, 2011

IBM Data Studio Version 3.1 consolidates some Optim tools

Now that the new version of IBM Data Studio is available for download, it is worth taking a look at it - even if you didn't like Data Studio and the other Optim tools so far. IBM Data Studio is free to download and to use. And it combines features from the previous Data Studio offering, from Data Studio Health Monitor, and from the Optim Database Administrator and Optim Development Studio into a single product. Given that the DB2 Control Center is deprecated as of DB2 9.7, you can take your guesses why Data Studio has been beefed up.

So what does the new Data Studio offer? It has all the base database administration capabilities, all the "wizards" for accomplishing - step by step - administration tasks. It supports development of SQL statements and procedures, including tuning, e.g., through visual explain. And Data Studio has lots of other features which may suit your needs. As I stated above, give it a try and let IBM know through the usual channels what you think of it...

Wednesday, October 19, 2011

What is a smart phone?

Recently, my kids started pushing me to buy a new mobile phone for myself. Not that the kids really care about what I am buying; they want the old one as a hand-me-down, to use as a gaming device. Anyway, as I am being pushed, I am looking into the market of smartphones. But what is a smart phone...?

Is it a phone that shuts down after enough work hours and tells me "family time"? Does it support "social integration", i.e., so that I talk with other people next to me and not just with folks on the phone (do you happen to enter rooms where everybody is talking, but only on the phone and not with each other)? Does it have an intelligent camera so I can enjoy a scenery live and not just as a picture (there was a beautiful sunset over the Alps on a recent flight)? What happens when I press the "home" button? How well does the navigation assistance work with a long shopping list in a crowded mall? And does it work without recharging for a whole week?

We'll see what phone I find. Any recommendations? My kids are pushing...

Monday, October 17, 2011

WOW for MDC on MQT - How to speed up the load process for a warehouse

When I was teaching a performance bootcamp earlier this month, one of the topics was about combining multi-dimensional clustering (MDC) with range-partitioned tables. Then a question came up about using MDC with materialized query tables (MQT) and whether it was possible to combine the two techniques. The customer hadn't succeeded before. As I saw no reason why the two could not be combined, I looked for documentation and examples: the DB2 Information Center describes how to combine MDC with MQT.

With the MQTs some of the heavy queries can be sped up by precomputing the answers for common complex subqueries. Using MDC for the MQTs can improve performance even further, depending on what can be clustered and how many dimensions are present.
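A minimal sketch of the combination, with table and column names made up for illustration:

```sql
-- Hypothetical deferred-refresh MQT, clustered by two dimensions
CREATE TABLE sales_summary AS
  (SELECT region, sales_month, SUM(amount) AS total
   FROM sales
   GROUP BY region, sales_month)
  DATA INITIALLY DEFERRED REFRESH DEFERRED
  ORGANIZE BY DIMENSIONS (region, sales_month);

REFRESH TABLE sales_summary;
```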

I also briefly tested db2look and how it reproduces the DDL for such a combination; it is supported by the tool, too.

For the customer it was a "Wow!" for the MDC on MQT, for the users it will mean faster queries...

Thursday, October 13, 2011

IDAA: IBM DB2 Analytics Accelerator announced

As you might have guessed from the date of my last post, I have been busy - actually I still am - with bootcamps and customers. Thus, today I will only point you to the announcement of the IBM DB2 Analytics Accelerator for z/OS. It is labeled Version 2.1 because V1 was named the IBM Smart Analytics Optimizer, which I wrote about last year. For more information read the announcement.

Tuesday, September 27, 2011

Su casa es mi casa: Restore in DB2 and obtaining SECADM and other privileges

One of the changes from DB2 9.5 to DB2 9.7 was the enhanced security, including extended abilities for SECADM and reduced or changed abilities for SYSADM and DBADM. The idea was to introduce more security and prevent data theft. However, as we have learned in life, all good comes with some drawbacks, and so it is for security as well. It reduces what an administrator can do and cuts down on flexibility (remember what taking a flight was like more than a decade ago?).

A common problem is taking backups of a system and trying to use them, e.g., for testing, on a different system. Users like SECADM need to be recreated in order to make things work on the system using the restored database. And that's why a "shortcut" was introduced in DB2 9.7 FP2. If the DB2 registry variable DB2_RESTORE_GRANT_ADMIN_AUTHORITIES is set to ON, then SECADM, DBADM, DATAACCESS, and ACCESSCTRL authorities are granted to the user performing the RESTORE DATABASE, typically one of the system administrators. Different methods of restore are supported.
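A sketch of the steps; the database name, backup path, and timestamp are illustrative, and the instance needs a restart for the registry variable to take effect:

```
db2set DB2_RESTORE_GRANT_ADMIN_AUTHORITIES=ON
db2stop
db2start
db2 restore database PRODDB from /backups taken at 20110927120000
```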

Sunday, September 25, 2011

High availability, epilepsy, functional MRI, and DB2 commands

In the past I had written a couple of times about epilepsy and some computer-related aspects. Today, I want to point you to some more interesting aspects of your brain and high availability. As written earlier, one of my sons has epilepsy which was caused by a brain tumor. Last January, my son and I had a very interesting session in the hospital where a functional MRI (fMRI or fMRT) was performed. The reason has to do with the high availability of the brain.

What happens when you use DB2 with HADR or pureScale and one machine fails? If all is configured right and it indeed works, tasks should move over to one of the machines still up. When a child has epilepsy, depending on the type and seriousness, parts of the brain can get damaged. However, the brain is flexible and to some degree self-repairing. As a result, the functional "processors" of the impacted area can move to a different part of the brain. In DB2 you can monitor the HADR environment and query the state of the pureScale cluster: Who is primary, who is in peer state, on which machine are what services active? But how do you find out where in your brain the speech center is located (actually one of many)? The solution that helped in our case was to perform a functional MRI. It showed where important parts were located and whether it was safe later during surgery to operate in those areas where planned.

Conclusion: DB2 is simple to administer compared to planning brain surgery.

BTW: Many hospitals use DB2 for patient records and much more.

Thursday, September 15, 2011

"Remember to flush" - Your options in DB2

When I hear the word "flush", I always have to think about the movie "The Man Who Knew Too Little" and the scene where Wallace/Spenser is told "Remember to flush".

In DB2 there are statements to flush the event monitors, to flush the package cache, and to flush the optimization profile cache. Well, with DB2 pureScale, the feature for application cluster transparency, you have one more option. It is for another cache, a big data cache. The new statement is FLUSH BUFFERPOOLS. It writes all dirty pages from the buffer pools to disk. The less dirty data is buffered, the shorter the recovery time in the event of a failure. Depending on your strategy and configuration for the page cleaners, this is a new statement to remember. Remember to flush...
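The statement itself is short; a sketch:

```sql
-- Write all dirty pages from the buffer pools to disk
FLUSH BUFFERPOOLS ALL;
```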

Tuesday, September 13, 2011

Some DB2 commands for the pureScale feature that saved my box...

DB2 nanoCluster
Some days ago I told you about db2greg and how I used it to repair a DB2 pureScale demo cluster. Well, there are more commands that help to repair an inconsistent cluster, some are new DB2 commands. I will describe them in this article (so I can quickly find them later on).

Before I start, let me tell you about the demo cluster, nicknamed "DB2 nanoCluster". It is inexpensive hardware: Intel Atom CPUs, 4 GB of RAM, and Gigabit Ethernet. The system uses SUSE Linux and DB2 9.8 (which is the pureScale feature). It consists of 3 machines, one used as a storage node and two others each hosting a DB2 member and a cluster caching facility. It can be built for a few hundred Euros or US Dollars and fits nicely into a box or between shelves (see the picture). Because of the (insufficient) hardware and the difference to a production system, there are performance differences and, importantly, missing redundancy. If you pull 2 power cables, including the one on the storage box, expect to have problems. Pull an Ethernet cable and have fun: in a production system a single cable would not matter, because there would be redundant connectivity to the storage server, multiple InfiniBand adapters, and an external network to the application cluster. Anyway, the nanoCluster and DB2 work nonetheless - at least in most cases, hence the repair commands...

What commands are useful to repair inconsistent configuration?

"db2iupdt -fixtopology" comes in handy when adding or removing cluster components resulted in a timeout and inconsistent state (someone pulling a cable by mistake?). It is used to repair the instance configuration.

"db2cluster -cm -repair -resources" can be used to fix inconsistencies between the db2nodes.cfg file and the cluster manager resource model (the RSCT layer).

Sometimes there are alerts by the DB2 members that need to be taken care of. "db2cluster -cm -list -alert" displays them, "db2cluster -cm -clear -alert" clears them.

The "db2instance -list" command shows the current status of the entire cluster.
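Putting the commands above together, a typical repair session might look like this (a sketch; the output and the commands actually needed depend on the state of your cluster):

```
db2instance -list                  # check the current status of the cluster
db2cluster -cm -list -alert        # any alerts raised by the DB2 members?
db2cluster -cm -clear -alert       # clear the alerts once resolved
db2cluster -cm -repair -resources  # rebuild the resource model if needed
```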

That's it for today, my cluster is up and running. Do you want to build and operate your own DB2 cluster? Take a look at this page.

Monday, September 12, 2011

New Redbook: Security Functions of IBM DB2 10 for z/OS

A quick recommendation: a few days ago IBM published a new and very interesting Redbook titled "Security Functions of DB2 10 for z/OS". In the 450+ page book the authors dig into all aspects of security, such as access control, cryptography, auditing, and many more.

Friday, September 9, 2011

DB2 pureXML for the Protein Data Bank - Managing Atoms and Molecules with XML

Yesterday an interesting new article was published on developerWorks, "Managing the Protein Data Bank with DB2 pureXML". It describes how scientists with highly complex data (the Protein Data Bank) - the atoms and molecules that make up proteins - can benefit from a highly sophisticated database management system, DB2.

At the core of the solution is pureXML to reduce the schema complexity (see the graphic for a relational design in the article). Compression is used to keep the database small and keep I/O to a minimum. Now add database partitioning across multiple nodes, range partitioning and multi-dimensional clustering to further help organize the data and to cut complexity and improve performance.

What a good example of combining many of DB2's features for the benefit of science.

BTW: This work was possible through the IBM Academic Initiative, which as part of its benefits allows free use of DB2.

Friday, September 2, 2011

Things for the curious: db2greg

It's Friday and it is a slow day. How about I tell you about a DB2 tool that, until 2 weeks ago, I had never heard of or used before? It is a tool that has been in DB2 for an eternity (I found an entry in the Information Center for version 8). I am talking about db2greg, which is used to view and change the DB2 global registry.

hloeser@BR857D67:~/Downloads$ db2greg -dump
S,DAS,,/opt/ibm/db2/V9.7/das,lib/,,4,, ,,

Using the "-dump" option as shown above lists the current entries, here for a DB2 9.7. But why would I have to use db2greg, and how did I find out about it? The reason is DB2 9.8, which basically is the pureScale feature that brings application cluster transparency. For lack of coffee and sleep, and with too much enthusiasm, I had pulled too many power cables at the same time on a demo machine (nanoCluster). That resulted in some "extra time" in bringing the machine back to "fully operational". In that process I had to clean up some system entries, e.g., as shown in this example in the Information Center.

You will notice in the example that some information DB2 needs to know about the GPFS and RSCT clusters is stored in the global registry (PEER_DOMAIN, GPFS_CLUSTER, etc.). If parts of a system are manually (re-)built, the registry may become inconsistent, and that's when db2greg options like "-delvarrec" and "-addvarrec" are needed to patch up the registry.

For me the mishap ended up with some extra work, but lots of new things learned. And remember, patch responsibly...

Tuesday, August 30, 2011

Summer (vacation) is over: Build up skills - DB2 Aktuell conference

My Summer vacation is over, which basically has been reduced work hours, no blogging, and switching off the phone for some days. Most of us are back to work now and the question is: Do we have enough skills to tackle the challenges ahead (or do we have the right team skills)?

Personally, I used some of the slow season to read and also "play" with technology. Attending conferences or classes to hear about new things and to discuss them with fellow technology workers is on my list for this Fall. Is it on yours? For those in Germany I would like to point to the DB2 Aktuell conference held in Bremen middle/end of September. It gives you the latest news on DB2 for z/OS and DB2 for Linux, UNIX, and Windows; discussions with others, including DB2 developers; and DB2 tutorials and certifications. And you will have a chance to meet me, count that as added value... ;-)

Monday, July 25, 2011

How to find out that it is time for vacation

Just send out an email to a larger internal email list. If the only type of response you receive are out of office emails, shut down your computer and start vacation, too.

Happy Summertime!

Tuesday, July 19, 2011

A small update on updating XML data in DB2

Once you get started with processing XML data within a database, such as DB2, the next question usually is: How can I update XML documents? Well, (relational) database systems usually have an UPDATE statement for modifying data. In DB2, the same UPDATE can be used to change XML documents. The way it is done is to provide the new XML document, which can be based on the previous version and then modified by applying expressions based on the XQuery Update Facility.

DB2 uses the so-called "transform expression" of that standard to let you describe (both SQL and XQuery are declarative query languages) how the new XML document should look. Instead of providing an example here, I will give you the link to "Transform expression and updating expressions" in the DB2 Information Center. There you find plenty of examples to get you started.

The interesting aspect of the transform expression is that because it is an expression like any other in XQuery, you can combine it with the rest of XQuery and modify XML documents on the fly (e.g., in XMLQUERY or XMLTABLE), use it in "if-then-else" expressions to update documents conditionally, or come up with new ideas of how to use it.
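Still, a minimal sketch of such an update may help; the table, column, and element names are made up for illustration:

```sql
-- Hypothetical table customers(id INTEGER, info XML):
-- copy the stored document, change one element, store the result back
UPDATE customers
SET info = XMLQUERY(
  'copy $new := $INFO
   modify do replace value of $new/customer/phone with "555-0100"
   return $new')
WHERE id = 42;
```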

Thursday, July 14, 2011

Energy consumption for another year in our passive house

A third year of keeping tabs on our energy consumption has passed. Luckily, even with changing weather and sometimes frosty days, the kilowatt hours that we used over the past 12 months stayed almost the same. I had reported about the previous year last July. So what do the energy meters say this year?

  • Household consumption: 2605 kWh (2010: 2473 kWh, 2009: 2244 kWh)
  • Heating/ventilation/water: 2713 kWh (2010: 2858 kWh, 2009: 2782 kWh)
As you can see, we used slightly more electric energy in our household, but slightly less for heating and ventilation. Our kids are growing and we need to cook more, they stay up longer (needing electric light in the Winter), they started listening to music, etc. (yeah, blame the kids). On the other hand, my home office requires more electricity, but I travel more often which keeps a balance.

Let's see where we stand next year. I would be happy with small changes only as we have seen this year. What is your consumption?

Tuesday, July 12, 2011

Data inconsistency causing 30 minute flight delay

On Friday evening I was traveling back home. The last leg from FRA to FDH is on a regional jet which is always parked in an outside position and requires a bus transfer from the terminal. I was happy because the bus left the terminal 20 minutes before the departure time, usually a sign of on-time arrival (that's what counts). However, it turned out differently...

Once we arrived at the airplane, all the passengers boarded and took their seats. We then waited for the usual "boarding complete" and welcome message, which didn't come. Instead, the flight attendants counted the passengers, re-counted, compared their passenger list with the seated crowd, and then got busy. At first, they asked the passengers in business class to show their boarding passes. After some more discussions, they went around in economy class asking for specific passengers. Suddenly, at least for us boarded passengers, two more passengers were brought to the plane and another car from operations arrived.

With some more discussions between crew, operations, and some passengers, the puzzle eventually got solved: It seemed that the crew was handed an outdated list (which they discovered), as there were some last-minute changes. Some of the new passengers arrived on time for boarding while some others on the first list arrived late. Hence we had a mix-up in the passenger count and with some names. All this caused a departure delay of 30 minutes (and an arrival delay of 20 minutes).

Monday, July 11, 2011

XQuery: Variable-based step in XPath expression

Recently I was asked how parts of an XPath expression can be passed into an XQuery in DB2. For performance, it is - of course - best to know all the steps when compiling a query. Let's look into the options by introducing an example:

Imagine a document where we have an element "greeting" inside either "b" or "c", for example <a><b><greeting>Hello</greeting></b></a> or <a><c><greeting>Hello</greeting></c></a>.

We want to search for either /a/b/greeting or /a/c/greeting and pass in the "b" or "c" as variable "qt" (query tag).

One option is to compose the entire statement inside the application by concatenating the query string, then executing it as dynamic SQL.

If the variable needs to be processed as part of a stored procedure, the query string could be composed inside the procedure, then prepared and executed using a cursor. The Information Center has a good example for such a procedure and XQuery for this second option.

A third option would be to process the variable as part of the XPath expression itself within XMLQUERY, XMLEXISTS, or XMLTABLE (XMLEXISTS shown):

XMLEXISTS('$DOC/a/*[local-name() eq $qt]/greeting' PASSING CAST(? AS VARCHAR(60)) AS "qt")

We are using the XPath function fn:local-name() to access the element name and then comparing it against the passed-in variable. Also note how we can use a parameter marker with our XQuery/XPath expression. The above works with namespaces, too; in that case use something like "/nsPrefix:*[local-name() eq $qt]/" in your XPath expression.
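Embedded in a full query, the third option could look like this; the table docs(id, doc) is hypothetical, and $DOC refers to the XML column by name:

```sql
-- Find documents that have a greeting under the element named by the
-- parameter marker ('b' or 'c' is passed in at execution time)
SELECT id
FROM docs
WHERE XMLEXISTS(
  '$DOC/a/*[local-name() eq $qt]/greeting'
  PASSING CAST(? AS VARCHAR(60)) AS "qt");
```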

Wednesday, July 6, 2011

Daddy in the cloud vs. Daddy as a Service vs. Traditional Daddy

I am traveling right now and this morning, on one of the flights ("in the cloud"), I was reading a longer article about cloud computing and its adoption in the SMB market (small and medium businesses). Then, suddenly, it dawned on me: I, as a daddy, can be compared to a business-critical application.

As a "traditional daddy", I am home quite often. My wife and kids have direct access to me and can (more or less) control what's going on - all at a price. I need food (chocolate, coffee, etc.), I need some care, sometimes there are outages or my family runs into "defects" that I have. Anyway, they like this "traditional daddy" and local hosting.

As "daddy in the cloud" I am available over the Internet and over the phone. Sometimes, my kids are so busy that they don't care whether they talk with daddy in person or over the phone. "Daddy in the cloud" is also cheaper than the "traditional daddy" because IBM is paying for the food. My wife has to cook less and buy less. There is less usage of water at home. All is good, except "daddy in the cloud" does not have all the extras of "traditional daddy". There are also questions about who can listen in to chats (on the phone or the Internet). Sometimes there are issues with scalability (a customer needs daddy, the family needs daddy - but only one daddy is available). Is this still an exclusive daddy, or can others gain access to it...?

What is needed is "daddy as a service" with a service level agreement (SLA). However, based on the requirements, it will be hard to meet the SLA given that I can't beam back and forth...

Thursday, June 30, 2011

The military, the space, the mainframe, and YOU

On Tuesday - while flying - I was reading the new edition of "Informatik Spektrum", the journal of the German Computer Science Association "Gesellschaft für Informatik", GI. One of the articles was meant as a discussion starter and was about why and how to teach mainframe skills at German universities. It described the key benefits of using a System z or mainframe and showed how mainframes are spearheading technology evolution.

I was reminded of that article this morning when reading this press release from last month, IBM validates Obsidian's Longbow for DB2 pureScale Stretch Clusters. I had written about a pureScale stretch cluster before; it is a geographically dispersed cluster with parts of the cluster at different locations. The press release mentions that Longbow, the technology to bridge distances for InfiniBand, was developed for the military. We all know that geographically dispersed clusters have been available for years on the mainframe only (GD Parallel Sysplex).

The point I wanted to make is that it is interesting to see what technology trickles down the chain and eventually reaches "the regular guy". In this case it was high-end cluster and networking technology that became available in the "distributed database world". What will be next? Watch the mainframe or the stars...

Friday, June 17, 2011

DB2 Merge Backup: When some deltas make a full

I sometimes teach Data Management at university, and one topic is backup strategies. We then discuss what is needed for a point-in-time recovery and what can be done to minimize the time needed for the recovery process. Full backups, incremental backups, delta backups, etc. are things to consider. Well, in a production environment, finding adequate maintenance windows to periodically take full backups, even online backups, can be a problem.

Some days ago DB2 Merge Backup became available. It combines incremental and delta backups to compute a full backup, so that actually taking such a full backup can be avoided. I just checked the product web page and a trial version is available. System requirements are DB2 LUW 9.5 or DB2 9.7, and it runs on most platforms.

Friday, June 10, 2011

Friday Fun: What people drink on the airplane...

Earlier this week, when flying back, I had some spare minutes and actually read the fine print on the back of my boarding pass. The section "Dangerous goods in passenger baggage" notes that, among others, poisonous substances and radioactive materials are prohibited. However, at the bottom it is pointed out that, among others, this rule does not apply to alcoholic drinks. Well, that explains the behavior and facial expressions of some fellow travelers...

Wednesday, June 8, 2011

DB2: What does the error code mean?

Sometimes I run into a DB2 error that I don't know, or a reason code is given whose meaning I don't know. Of course, going to the DB2 Information Center and searching for, e.g., SQL5099N, is a solution. However, I am not always online or don't have a local copy of the Information Center available. But there is a simple solution close by: DB2 itself.

When you use the DB2 Command Line Processor (CLP), there is a small help section (try the "?"). Type in the question mark and the error code, and DB2 reveals the full error message, including an explanation of reason codes and the so-called user response.

> db2 "? sql5099n"
SQL5099N  The value "" indicated by the database configuration
      parameter "" is not valid, reason code "".

Explanation:

The value of the named parameter is not valid for one of the following

The requested change is not made.

User response:

Resubmit the command with a valid value for the named parameter.

sqlcode: -5099

sqlstate: 08004

Life can be easy, even (especially?) when offline...

Tuesday, May 31, 2011

Several new Best Practices for DB2 papers on IBM developerWorks

As you probably know, IBM developerWorks has a section named Information Management Best Practices. It includes a collection of papers for some of the IM products, including best practices for DB2 for Linux, UNIX, and Windows and DB2 for z/OS best practices. It is good to have such a resource with a wealth of information. Even better is that material is added or updated over time. Therefore, visit developerWorks and check out the new papers added over the past couple of weeks.

Friday, May 27, 2011

Friday Fun: IBM Commercial from the past

Ever got stopped by someone from "help desk"...?

Memory leaks in software and epilepsy

We all know what impact memory leaks have and how hard - at least sometimes - they are to detect. Often there are small but repeating leaks, and over time they add up to resource issues, weird application behavior, or crashes.

Epilepsy can be similar. Sometimes it can be spotted easily, sometimes it is hard to detect. In the case of my son, we didn't notice his condition for some years. There are many causes for epilepsy - in his case, a brain tumor. It was constantly "nagging" the surrounding parts of his brain. Over time it added up and then caused (relatively mild) seizures. Initially they were seldom and infrequent: every few weeks we spotted some short but strange behavior of our son. When the pediatrician first suspected epilepsy and EEGs of different durations were ordered, nothing special was found. That's when an MRI was done and eventually the tumor was detected. Next on the list was a visit to an epilepsy monitoring unit (EMU) with week-long 24x7 EEG, video, and audio capture. All that activity was necessary to track down and understand my son's epilepsy.

When you know (or think) that there is a memory leak inside your application, there are different ways to try to find it. These days there are tools to assist. Still, sometimes the leak cannot be found easily, or its impact not easily seen - similar to epilepsy and what it causes.

At IBM we use and recommend IBM Rational Purify to detect memory leaks in software. DB2 offers the db2mtrk tool to show you how memory is used and db2top can also show you such information.

Monday, May 23, 2011

Epilepsy: When computer science and medical science meet

Over the next couple of weeks I plan to write some blog posts dealing with epilepsy. One of my sons - maybe it is fair to say, the entire family - has been fighting epilepsy for some years now. We don't know when his epilepsy really started, it has been at least 2 years with visible impact. We expect that it is gone now because he had surgery about 3 weeks ago to remove a brain tumor which caused his epilepsy.

During our visits to doctors and clinics, by talking with other parents and meeting kids with various forms of epilepsy, and by reading, we learned a lot. The more we learned, the clearer it became to me how close computer science and epilepsy are. In computer science, e.g., for database systems, we care about cluster and high availability technologies. Epilepsy is a neurological disorder that impacts the CPU, the main board, and the I/O system that we humans utilize.

What I find interesting is that about one person in every hundred has or had epilepsy and that there are more than 40 different epilepsy syndromes. Some can be dealt with (controlled) by drugs, some cured by surgery, but for many neither works. We learned that the human brain uses advanced technologies to guarantee high availability and to recover from failures. Often, a single point of failure is avoided. For computer issues we call in service and have parts replaced. For us humans, however, we should be grateful when our core system is running "normally" - many don't share this fortune...

Monday, May 16, 2011

Overview: DB2 Base Tables, Created Temporary Tables and Declared Temporary Tables

One of the new features in DB2 9.7 is "Created Temporary Tables". Previously, users had to distinguish between base tables (in all different forms and shapes) and declared temporary tables. So what are the commonalities and differences between the three different table types that are now supported in DB2?

Fortunately, as in most cases, there is the DB2 Information Center. For exactly my question above there is an excellent overview (in table format!) that compares the three table types by feature category. Did you know that indexes are supported on all of them, or that you have to specify LOGGING for temporary tables in order to support UNDO operations?
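To make the distinction concrete, here is a minimal sketch (table and column names are made up for illustration; both statements assume a user temporary table space exists):

```sql
-- Created global temporary table (new in DB2 9.7): the definition is
-- stored persistently in the catalog, but every session sees only
-- its own private rows.
CREATE GLOBAL TEMPORARY TABLE session_totals
  (dept_id INT, total DECIMAL(12,2))
  ON COMMIT DELETE ROWS;

-- Declared global temporary table: definition and rows exist only
-- for the current session and are referenced via the SESSION schema.
DECLARE GLOBAL TEMPORARY TABLE session_scratch
  (dept_id INT, total DECIMAL(12,2))
  ON COMMIT DELETE ROWS;
```

The created variant only has to be defined once and can then be used by any session; the declared variant has to be declared anew in every session that needs it.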

For a reference of the syntax, see here:

Friday, May 13, 2011

How to stretch a pureScale cluster: Configuring geographically dispersed DB2 pureScale clusters

Recently, two very interesting new articles were published on developerWorks. One deals with building geographically dispersed DB2 pureScale clusters, the other takes an in-depth look at how to upgrade from DB2 9.7 to the DB2 pureScale feature. Both papers include a detailed description of each step.

Geographically dispersed clusters, sometimes referred to as stretch clusters, span multiple locations to be able to continue with processing even when an entire site is down. When all involved sites are up, the processing power of the available sites is used (active/active).

Tuesday, April 19, 2011

DB2 9.7 FP4 is out: Trigger support has been enhanced and some other goodies

Fixpack 4 is now available for DB2 9.7. An overview of new features and enhancements is in the Information Center. Many of the enhancements are designed to make migration from other database systems, namely Oracle, simpler, saving even more on migration costs.

What stands out from my perspective are the enhancements to the trigger functionality. You can now lump the definitions of update, delete, and insert triggers together into a single DDL statement. Speaking of statements, support for statement triggers that fire only once per execution has been added to DB2's PL/SQL functionality, too.

Many of you will like (pun intended!) a new LIKE feature. It is now possible to use a column reference as the pattern expression, i.e., to look up the actual pattern in a different table.
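As a sketch of what this enables (hypothetical table and column names):

```sql
-- Look up the LIKE pattern in another table instead of hard-coding it
-- in the statement (possible with DB2 9.7 FP4 and later).
SELECT c.name
FROM customers c, search_patterns p
WHERE p.pattern_id = 1
  AND c.name LIKE p.pattern
```

This way an application can keep its match patterns as data and change them without touching the SQL statement.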

Note that the fixpack has already been uploaded and the Information Center has been updated. However, it seems that the fix list overview page and the fixpack summary page still need to be updated.

Friday, April 15, 2011

New IBM Redpaper: Highly Available and Scalable Systems with IBM eX5 and DB2 pureScale

A new IBM Redpaper discussing DB2 pureScale on System x eX5 machines has been published this week. The paper gives a high-level introduction to pureScale and its benefits and talks about the situation at a specific customer. It is not deeply technical, but it is a good introduction with some links to more resources.

My resource page has been updated

I keep a DB2 and pureXML Resources page as part of this blog. I added some more links and will continue with this as I find time.

Tuesday, April 12, 2011

Pure and free: Test pureXML on pureScale

You probably already know that there is a feature for XML data processing in DB2 which is called pureXML. You also know there is a continuous availability and scale-out feature for DB2 called pureScale. And I won't talk about pureQuery today. But did you know that pureXML is supported on pureScale? Did you know that you can request access to the so-called pureScale Acceleration Kits and try it out yourself (for free)?

I recently was the instructor for a pureScale workshop, and one of the attending companies reported that they had built their own small pureScale system. Such a mini system - we often refer to them as nanoClusters (or here) - has all the features of a real cluster, but costs only a few hundred dollars/euros in hardware, and you have your own pureScale system. If you want to "go high-end", you could request a proof of concept/proof of technology at one of the IBM locations. But in either case, test drive pureXML on pureScale...

Friday, April 8, 2011

Some more time, some XML functions

I had written about some details regarding current time and timezone for the regular DB2 and the Oracle mode. Now I have had some time to play with built-in XQuery functions.

There are quite a few that deal with time, date, timestamps, and timezones. For XML and XQuery processing, an implicit timezone of UTC (Coordinated Universal Time) is assumed. This is something to keep in mind when processing XML data - different semantics again (who said life is easy?). XQuery defines a good number of functions and operators on durations, dates, and times. DB2 supplements that with some more functions to make life simpler (not easy) in DB2. Most of them provide the local value, i.e., the one related to where your database server is located.

xquery current-dateTime()

Note the "Z" behind the returned timestamp, indicating Zulu (UTC) time, not necessarily the time of your location. DB2 adds its own function to deliver the local value:

xquery db2-fn:current-local-dateTime()

Now the "Z" is missing and we have a timestamp without a timezone. How about some experiments with timezones?

xquery db2-fn:local-timezone()

xquery db2-fn:current-local-dateTime() - current-dateTime()

With the first call we can obtain the timezone at our place. In my example it is the Central European Time with Daylight Savings being active. Not surprisingly, subtracting the (global) current time from the current local time, we receive the same difference in hours.

xquery adjust-dateTime-to-timezone(current-dateTime(),  db2-fn:local-timezone())

XQuery has a function to adjust a timestamp to a specific timezone. In the example above, we use the already known functions as input, and the result is another timestamp. The interesting part is that now a timezone indicator is returned, "+02:00".
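For a reproducible variant with a fixed input value (the timestamp is illustrative, not actual output from my system):

```sql
-- Adjust a fixed UTC timestamp to a +02:00 timezone; the result then
-- carries the "+02:00" indicator instead of "Z".
xquery adjust-dateTime-to-timezone(
  xs:dateTime("2011-04-08T10:00:00Z"),
  xs:dayTimeDuration("PT2H"))
-- expected result: 2011-04-08T12:00:00+02:00
```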

My time is up for today. If you have time, try out the other time-related functions in XQuery...

(Updated) Times are changing: DB2 vs. Oracle mode

If you expected a showdown between the two database systems as part of this blog article, you will be disappointed. I only wanted to show you some - at first sight strange - behavior you can run into, depending on whether you are using the regular DB2 date and timestamp semantics or the Oracle compatibility mode.

Let's start with a regular DB2 database:
db2 => values current timestamp


  1 record(s) selected.

db2 => values current timestamp - current timezone


  1 record(s) selected.

Now we switch to a database with the DB2_COMPATIBILITY_VECTOR set to ORA (and date_compat enabled):
db2 => values current timestamp


  1 record(s) selected.

db2 => values current timestamp - current timezone


  1 record(s) selected.

The year 1956? This looks strange. But when looking into the documentation for date values in Oracle mode, we learn that we are operating on TIMESTAMP(0) semantics and that adding or subtracting values means dealing with days. Be aware, or you will be turning the wheel of time faster than you imagined...

Update: I thought I should point out how I solved the puzzle. The following gives the same result regardless of which mode you are working in.

db2 => values current timestamp - (current timezone / 10000) hours


  1 record(s) selected.
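Why does dividing by 10000 work? The CURRENT TIMEZONE special register returns a decimal value of the form HHMMSS, so integer division by 10000 keeps just the hours. A small sketch (the value shown in the comment assumes a server in timezone UTC+2):

```sql
-- For UTC+2, CURRENT TIMEZONE is 20000 (2 hours, 00 minutes, 00 seconds),
-- and 20000 / 10000 = 2, the offset in whole hours. Note that timezones
-- with half-hour offsets (e.g., +05:30) would lose the minutes part.
values (current timezone, current timezone / 10000)
```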

Friday, April 1, 2011

Get the hint: How to enable support for Oracle-style hints in DB2

One of the questions more frequently asked by DBAs coming from Oracle to DB2 is how optimizer hints work in DB2. The standard answer we give is that there are no hints of that kind in DB2 and that a cost-based optimizer is used. Keep your statistics up to date (and maybe use automatic runstats), try out different optimization levels, and maybe set some of the documented (and undocumented) registry variables.

Well, I just mentioned undocumented registry variables. When you run "db2set -lr", you get all supported registry variables listed - about 190 in my installation of DB2 9.7. A list of many of them, with some additional links, is in the DB2 Information Center. When you go to the overview of performance variables, you will notice the variable DB2_EXTENDED_OPTIMIZATION. Now here comes the trick to enable Oracle-style hints:


Make sure it is really set; calling just "db2set" should show the new value. Then try it out. For my test, I created two tables with about 100,000 rows. Then I executed my query, first without hints, to measure the time and to look at the access plan.
Then I tried it with hints:

select /*+ Make this really fast, please */ a.desc, ..., b.colfoo, b.doc
from hlhints a, mytest b
where ... and b.colfoo LIKE '%!!argh!!%'

Just based on the measured time, I can say that it really works. Try it yourself.

Monday, March 14, 2011

Oracle compatibility in DB2: Some strings attached

One of the great features that came with DB2 9.7 is the so-called SQL compatibility. That is, DB2 supports a bigger part of Oracle's SQL syntax and the related semantics. This is great when you want to "break free", i.e., move your applications from Oracle to DB2. Many DB2 developers have also looked into which of the compatibility features to use for their systems.

In this post, I want to point out the differences in string semantics depending on whether you are in Oracle or DB2 mode. Let's take a look at the regular DB2 mode first:

db2 => create table foo(id varchar(20) unique not null, num int)
DB20000I  The SQL command completed successfully.
db2 => insert into foo values('abc',1)
DB20000I  The SQL command completed successfully.
db2 => insert into foo values('abc  ',2)
DB20000I  The SQL command completed successfully.

When you switch to the Oracle mode by setting the DB2_COMPATIBILITY_VECTOR, you implicitly also switch to VARCHAR2 semantics. With that, different semantics are applied to strings, including the removal of trailing blanks for comparisons and the conversion of empty strings to NULLs. What does that look like for our example (after setting the vector, restarting DB2, and creating a different database)?

db2 => create table foo(id varchar(20) unique not null, num int)
DB20000I  The SQL command completed successfully.
db2 => insert into foo values('abc',1)
DB20000I  The SQL command completed successfully.
db2 => insert into foo values('abc  ',2)
DB21034E  The command was processed as an SQL statement because it was not a
valid Command Line Processor command.  During SQL processing it returned:
SQL0803N  One or more values in the INSERT statement, UPDATE statement, or
foreign key update caused by a DELETE statement are not valid because the
primary key, unique constraint or unique index identified by "1" constrains
table "HLOESER.FOO" from having duplicate values for the index key. 

Quite different behavior than before, and something to consider before enabling some of the compatibility features. For the XML crowd I want to point out that XQuery string semantics are not impacted by the switch - they are different from both...
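The empty-string conversion mentioned above can bite as well; here is a sketch against the same table:

```sql
-- In Oracle compatibility mode (VARCHAR2 semantics), '' is treated as
-- NULL, so this insert should fail with the NOT NULL constraint on id.
-- In regular DB2 mode, the same statement stores an empty string.
insert into foo values('', 3)
```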

Tuesday, March 8, 2011

One day DB2 pureScale workshops in Ehningen (Stuttgart) and Paris coming up

Workshops covering DB2 pureScale are coming up on March 17th in Ehningen (Stuttgart area, Germany) and in Paris (France) on May 6th. They include presentations in the morning and hands-on training in the afternoon. For more information and to register, visit the DB2 pureScale workshop website.

Monday, February 14, 2011

Valentine's Day: How to date with DB2...

Today is Valentine's Day, according to Wikipedia an annual commemoration "celebrating love and affection between intimate companions". This sounds like the typical DBA and database system relationship to me. What fits better on this day than looking into some of the date functionality DB2 has to offer?

Everybody should know whom they are dating. In case you don't know, here is an example to learn more about your current date:

values(current date)


To find out what was going on yesterday, the following might come in handy:
values(current date - 1 day)
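Labeled durations can also be combined, and the DAYS function helps with date differences. A small sketch:

```sql
-- Combine labeled durations, and compute the number of days elapsed
-- since the start of the year by converting dates into day counts.
values (current date + 3 months - 1 day,
        days(current date) - days(date('2011-01-01')))
```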


Many people wonder whether they and their date will match up. I always know when looking at the "date compatibility".

get db cfg
Date compatibility                                      = OFF

Well, this doesn't look too promising, but I know a way around it. I can set the DB2_COMPATIBILITY_VECTOR and then try again (this requires an instance restart and a new database - such is life!).

get db cfg

Date compatibility                                      = ON

This looks much better and more promising, and I am sure that my (so far Oracle-based) application and DB2 are the perfect match. And because of the right financial details (licensing, maintenance, HR costs, ...) and properties like automatic maintenance (who would like that one in a household?), compression (are the cabinets and the attic crammed?), and performance, they seem to move towards the "happily ever after" ending...

Tuesday, February 8, 2011

Power of attorney and trusted contexts

When you file a patent, you usually don't handle everything yourself because someone else is doing the (paper)work for you. A similar thing can happen with other paperwork or in places where you usually have to appear in person. In most cases you can delegate the work and sign a power of attorney (in German: "Vollmacht"), i.e., a document that grants the person mentioned in it the power to act or sign on your behalf. The person acting can represent multiple other persons, and he/she may switch roles during the process.

This reminded me of trusted contexts in DB2. When a trusted context is used, that special user can act for others by switching the user ID. As with legal matters, there are also some strings attached to trusted contexts, and this overview page gives you an introduction to what to expect.
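As a sketch of the DDL (the context name, authorization IDs, and address are made up for illustration):

```sql
-- The application server connects as APPSRV from the given address;
-- within that trusted connection it may switch the user ID to ALICE
-- or BOB without re-authentication.
CREATE TRUSTED CONTEXT appctx
  BASED UPON CONNECTION USING SYSTEM AUTHID appsrv
  ATTRIBUTES (ADDRESS '192.0.2.10')
  ENABLE
  WITH USE FOR alice, bob WITHOUT AUTHENTICATION
```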

BTW: This post was written on behalf of Henrik.

Tuesday, February 1, 2011

Redbook published: Extremely pureXML in DB2 10 for z/OS

Now the IBM Redbook "Extremely pureXML in DB2 10 for z/OS" has made its way to the public. On more than 300 pages it covers all important topics for XML data processing in DB2 on the mainframe.

Friday, January 28, 2011

World Economic Forum, Parking Problems, and DB2 pureXML

The World Economic Forum in Davos certainly is interesting, at least from a logistics and data perspective. A lot of the big and famous try to be there. And who would have thought that parking is a problem?

When you want to fly into Davos, you usually fly into Zurich airport (ZRH) or the airport in Friedrichshafen (FDH). The latter is close to where I live, and this morning it was still serving as a parking lot for small and big private jets from around the world. Some of the VIPs flew directly into FDH and changed into helicopters or cars, but a good chunk opted for the "fly to Zurich, park your jet in Friedrichshafen" option. For the average guy it seems funny that even with some money behind you, you can get into parking problems. Well, not the regular ones...

How do I get from this to database systems? You cannot just fly to an airport and hope for the best. There is some trip planning and logistics involved, and a lot of communication. These days, parking your vehicle needs some extra communication - the more efficient, the better.

Now to DB2 and pureXML: A lot of applications send XML to the database or retrieve it from the database. So far this meant that the sender had to produce text-based XML data from its internal structures, and the receiver had to parse it and rebuild its internal structures. Both take time and cost money (we are living in a fast world). Starting with DB2 10 for z/OS, both the DB2 client and server use binary XML, a special optimized format for encoding XML documents. This allows the "on the wire" representation to be produced faster, and it is also simpler to process on the receiving side. Communication made faster and more efficient - a better chance to find a parking space for your vehicle of choice...?

Monday, January 24, 2011

Why XML columns have a length of zero

One interesting question when looking at the metadata of tables is why XML columns don't have a length.

db2 "create table thebigq(id int, name varchar(40), doc xml inline length 1000)"
DB20000I  The SQL command completed successfully.

hloeser@BR857D67:~/1Q11$ db2 describe table thebigq

              Data type                 Column
Column name   schema    Data type name Length     Scale Nulls
------------- --------- -------------- ---------- ----- ------
ID            SYSIBM    INTEGER                 4     0 Yes  
NAME          SYSIBM    VARCHAR                40     0 Yes  
DOC           SYSIBM    XML                     0     0 Yes  

  3 record(s) selected.

Well, for an INTEGER type the size is well known because of the bits allocated to store the value. For VARCHAR strings, the specified length (in bytes) is what is shown. For XML columns, however, there is no maximum size that you can specify. This is in contrast to LOBs (BLOBs, CLOBs). There is also no maximum size of an XML document on disk; it is only determined by storage and possibly the document structure.
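The system catalog tells the same story; here is a small sketch querying it:

```sql
-- SYSCAT.COLUMNS reports a LENGTH of 0 for the XML column, matching
-- the describe output above.
select colname, typename, length
from syscat.columns
where tabname = 'THEBIGQ'
```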

When pureXML was designed, there was a long discussion about what the length shown above should be. Should we just show a really big number, like 2 GB, or decide that it is -1 or -2?

A 2 GB XML document - this is the maximum that can be transferred into DB2 - can take up more or less than 2 GB of storage space once inserted. Hence, 2 GB is not a good value. Negative numbers like "-1" usually have a special meaning and therefore were also not chosen. So it became zero (0) - XML values have the size they have. Something mystic...!?!

Monday, January 17, 2011

Current DB2 Fixpacks

The year is still young, but I already had a business trip last week. When looking at/after your systems, planning new ones, or just teaching or discussing DB2, it is good to know the current fixpack levels.

The page "DB2 Fix Packs by version for DB2 for Linux, UNIX, and Windows" lists all of them from version 8.2 to 9.8. As of today, DB2 9.5 is at FP7 (released December 13th, 2010), DB2 9.7 at FP3a (released October 20th, 2010), and DB2 9.8 (this is the pureScale feature) at FP3 (released December 17th, 2010). As you can see, two fixpacks for the recent DB2 versions came out just before the holidays.

Hello and some lessons from the holidays

I just wanted to say "Hello" and wish you the best for 2011. I am already deep into work and haven't had time (or the energy?) to post. However, I wanted to share some lessons from the holidays with you:

  • Think about the impact of your actions.
  • Time is valuable, don't waste it.
How come? I own a discount card from the German railway company for my business travel. The railway company sent me an email wishing me happy holidays, even mentioning a gift for me in the subject. After opening the email (I love real gifts and I am curious), I read it and saw that the gift was a "some euros off" coupon for my next trip. Then I saw that - as usual - some footnotes were attached.

  • The coupon was only valid for December 24th to 31st. This was the first bummer.
  • It was only valid on certain trains. 2nd bummer.
  • It was only valid with a certain minimum travel value. 3rd bummer.
In other words: the "gift" was worthless to 99.5% of the recipients. Probably some marketing guy had an idea for a campaign to hand out gifts to the most valued customers and got it approved because it doesn't cost anything. Well, it did cost something: if the railway company had just wished a happy holiday, everything would have been fine. But by wasting their customers' time and sending something worthless and overpromising, it left disappointed recipients behind.