Thursday, November 20, 2014

Useful DB2 administrative functions and views

Did you know that there are about 80 (eight-zero) administrative views in the SYSIBMADM schema in DB2, ready for use? I have used several of them and also looked into the documentation, but 80 is quite a lot. (Almost) all of them are documented in the DB2 Knowledge Center in the "Built-in routines and views" section.

The routines live in the SYSPROC schema; the administrative views can be found in the SYSIBMADM schema. Given that insight, it is easy to construct a simple query to list all available views:

SELECT viewname FROM syscat.views WHERE viewschema = 'SYSIBMADM'
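
If you are also curious about the routines, a similar query against the SYSCAT.ROUTINES catalog view should list them:

SELECT routinename FROM syscat.routines WHERE routineschema = 'SYSPROC'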

Depending on your version and fixpack level of DB2, the result will vary. Speaking of fixpack levels, do you know how to find out what your system is running by using SQL? The view ENV_INST_INFO may help in that case because it returns instance-related information such as the instance name, the DB2 version, fixpack, and build level:

SELECT * FROM SYSIBMADM.ENV_INST_INFO
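
If you are only after the version details, you can pick individual columns (column names as documented in the Knowledge Center):

SELECT INST_NAME, SERVICE_LEVEL, FIXPACK_NUM FROM SYSIBMADM.ENV_INST_INFO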

Are you connected to, e.g., an Advanced Workgroup Server Edition (AWSE) of DB2 or an Enterprise Server Edition (ESE)? Find out by querying the product information using the view ENV_PROD_INFO. It returns the installed product, the kind of active licenses, and more:

SELECT * FROM SYSIBMADM.ENV_PROD_INFO
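
To see just the installed product and the license status, selecting two columns should do (again, column names as documented for this view):

SELECT INSTALLED_PROD, LICENSE_INSTALLED FROM SYSIBMADM.ENV_PROD_INFO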

Next in the list of useful views with system information is ENV_SYS_INFO. It can be used to find out more about the operating system, the type of hardware, installed CPUs and memory, etc.:

SELECT * FROM SYSIBMADM.ENV_SYS_INFO

Last, but not least, in my list of views with basic system information are DBMCFG and DBCFG. As their names imply, these views help to retrieve the current instance (database manager / dbm) and the current database (db) configuration. That makes it easy to find out whether the self-tuning memory manager (STMM) is active or where the diagnostic logs are stored.
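
For example, the following two queries should answer exactly those two questions. Note that self_tuning_mem is a database (db) parameter and diagpath an instance (dbm) parameter, and that the parameter names in these views are lowercase:

SELECT NAME, VALUE FROM SYSIBMADM.DBCFG WHERE NAME = 'self_tuning_mem'

SELECT NAME, VALUE FROM SYSIBMADM.DBMCFG WHERE NAME = 'diagpath'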

That's it for today; I am back to playing with more of those views (and routines)...

Monday, November 17, 2014

A quick look at dashDB and a happy SQuirreL

dashDB slogan on its website
This morning I took some time to look at dashDB, a new IBM DWaaS (Data Warehouse as a Service) offering. When you go to the dashDB website, you are offered two choices: use the dashDB service available on IBM Bluemix, or use a Cloudant account to add a warehouse to your JSON database. Let me give you a brief overview of what you can do with dashDB and how I connected a local (open source) SQuirreL SQL client to my new dashDB database.

Cloudant Warehousing (dashDB)
dashDB is a cloud-based analytics database ("analytics in a dash") with roots in Netezza and DB2 with BLU Acceleration. Data is stored in table (rows and columns) format. It is ready to connect all kinds of analytic tools, local or cloud-based, and is already set up for geo-spatial data analysis (instructions on how to use the ESRI ArcGIS Desktop are provided). The best part is that your regular SQL database/analytic tools continue to work; see below for details.

dashDB: schema discovery
I started my journey by logging into my existing Cloudant account. There, on the dashboard menu, is a new item "Warehousing". When clicking the "New Warehouse" button, you can select the Cloudant databases that you want to import into the warehouse. Because multiple databases can be associated with a Cloudant account or a Bluemix Cloudant service, this step lets you pick the data of choice. After the source data is chosen, the dashDB database is created and so-called schema discovery turns the JSON documents into rows of tables. Thereafter, the data is ready to have analytics applied. That is the time to launch the dashDB control center, another so-called "dashboard".
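
To illustrate the idea with a made-up example (the document, table, and column names below are my own, not actual dashDB output): a JSON document like {"_id": "order1001", "customer": "Acme Corp", "total": 59.90} could end up as a row in a discovered ORDERS table with columns such as CUSTOMER and TOTAL, queryable with plain SQL:

SELECT CUSTOMER, TOTAL FROM ORDERS WHERE TOTAL > 50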

The welcome screen shows some of the analytic options; e.g., the database is ready to be used with Cognos, SPSS, InfoSphere DataStage, or R scripts, or with all of them and more:
Analytics for dashDB: Cognos, SPSS, DataStage, R
SQuirreL SQL client - dashDB connected
Because some time ago I already tested and blogged about a predecessor of dashDB (see here: how to set it up and how to use R), I was more interested in trying out a JDBC-based client with my new cloud-based data warehouse. Included as part of the dashboard are several sections that help you with the application setup. So it was easy for me to obtain the JDBC URL and to configure it, together with the listed userid/password, in my local SQuirreL SQL client (it will work in IBM Data Studio and the Optim tool, too). As you can see from the screenshot, the database connection from my laptop to the cloud-based dashDB succeeded. Ready for some SQL.
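
For reference, a JDBC URL for the IBM Data Server Driver (type 4 connectivity) follows this general pattern; the host, port, database name, and credentials below are placeholders that you replace with the values shown on your own dashboard:

jdbc:db2://<host>:<port>/<database>:user=<userid>;password=<password>;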

My lessons learned from testing database queries on the converted data (JSON to relational) will be part of another blog entry. Stay tuned...