Showing posts sorted by relevance for query chocolate. Sort by date Show all posts

Tuesday, November 26, 2013

MySQL-style LIMIT and OFFSET in DB2

An "ancient" but not yet well-known feature in DB2 is the support for MySQL/PostgreSQL-style LIMIT and OFFSET in SELECT statements and in searched UPDATE/DELETE. Unfortunately, it is not really documented, and I am working on getting some more documentation added (think of "mostly harmless" as in The Hitchhiker's Guide to the Galaxy).

To demonstrate what is possible, I created a small table full of good stuff:
db2 => create table sweets(id int, desc varchar(60), quant int)
DB20000I  The SQL command completed successfully.
db2 => insert into sweets values(1,'dark chocolate',4)
DB20000I  The SQL command completed successfully.
db2 => insert into sweets values(2,'marzipan bar',1)
DB20000I  The SQL command completed successfully.
db2 => insert into sweets values(3,'almond cookies',10)
DB20000I  The SQL command completed successfully.
db2 => insert into sweets values(4,'granola bar',1)
DB20000I  The SQL command completed successfully.
db2 => insert into sweets values(5,'nut chocolate',1)
DB20000I  The SQL command completed successfully.
db2 => insert into sweets values(6,'chocolate-covered peanuts',1)
DB20000I  The SQL command completed successfully.
db2 => select * from sweets limit 5
SQL0104N  An unexpected token "limit" was found following "".  Expected tokens
may include:  "FETCH FIRST ROWS ONLY".  SQLSTATE=42601


By default, DB2 does not accept LIMIT and OFFSET. But setting the registry variable DB2_COMPATIBILITY_VECTOR to MYS (as in MySQL) or to "04000" enables this syntactic alternative to FETCH FIRST n ROWS ONLY. After restarting the instance, the SELECT works:

db2 => select * from sweets limit 5

ID          DESC                                                QUANT
----------- --------------------------------------------------- -----------
          1 dark chocolate                                                4
          2 marzipan bar                                                  1
          3 almond cookies                                               10
          4 granola bar                                                   1
          5 nut chocolate                                                 1

  5 record(s) selected.


The following two examples show the use of OFFSET, i.e., where to start in the result set. I combined it with ordering on the quantity.

db2 => select quant,desc from sweets order by quant limit 3 offset 2

QUANT       DESC                                                       
----------- ------------------------------------------------------------
          1 nut chocolate                                              
          1 chocolate-covered peanuts                                  
          4 dark chocolate                                             

  3 record(s) selected.


db2 => select quant,desc from sweets order by quant limit 2,3

QUANT       DESC                                                       
----------- ------------------------------------------------------------
          1 nut chocolate                                              
          1 chocolate-covered peanuts                                  
          4 dark chocolate                                             

  3 record(s) selected.


Note that the second example uses a special abbreviated syntax: the first number is the offset where to start, followed by the number of rows to return.
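You do not need a DB2 instance to try the two syntax forms side by side: SQLite happens to accept both the LIMIT ... OFFSET ... form and the abbreviated LIMIT offset, count form as well. Here is a quick sketch using Python's built-in sqlite3 module (note that the column desc is renamed to descr, because DESC is a reserved word in SQLite):

```python
import sqlite3

# In-memory table mirroring the sweets example above ("desc" renamed
# to "descr", since DESC is a reserved word in SQLite)
conn = sqlite3.connect(":memory:")
conn.execute("create table sweets(id int, descr varchar(60), quant int)")
conn.executemany("insert into sweets values(?,?,?)", [
    (1, 'dark chocolate', 4), (2, 'marzipan bar', 1),
    (3, 'almond cookies', 10), (4, 'granola bar', 1),
    (5, 'nut chocolate', 1), (6, 'chocolate-covered peanuts', 1)])

# Both statements skip the first two rows and return the next three
a = conn.execute(
    "select quant, descr from sweets order by quant limit 3 offset 2").fetchall()
b = conn.execute(
    "select quant, descr from sweets order by quant limit 2,3").fetchall()
assert a == b
print(a)
```

Of course the result set is only guaranteed to be deterministic here because of the ORDER BY clause; paging without an ordering is undefined in any of these databases.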

As said, there is not much in the DB2 Information Center (yet). Serge has some more background in his blog about different ways of paging through result sets.

Friday, July 25, 2014

The Hunt for the Chocolate Thief (Part 2) - Putting IBM Bluemix, Cloudant, and a Raspberry Pi to good use

I am still on the hunt for the mean chocolate thief, kind of. In the first part I covered the side of the Raspberry Pi and uploading data to Cloudant. I showed how to set up an infrared motion sensor and a webcam with the RPi, capture a snapshot, and secure the image and related metadata in a Cloudant database on the IBM Bluemix Platform-as-a-Service (PaaS) offering. In this part I am going to create a small reporting website with Python, hosted as an IBM Bluemix service.

Similar to an earlier weather project, I use Python as the scripting language. On Bluemix, which is based on Cloud Foundry, this means "bring your own buildpack". I already described the necessary steps, i.e., telling Bluemix how to create the runtime environment and install the needed Python libraries. So how do I access the incident data, i.e., the webcam snapshots taken by the Raspberry Pi when someone is in front of the infrared motion sensor? Let's take a look at the script:

import os
import json
import couchdb
from flask import Flask, redirect

app = Flask(__name__)

# couchDB/Cloudant-related global variables
couchInfo = ''
couchServer = ''
couch = ''

# get the service information if running on Bluemix
if 'VCAP_SERVICES' in os.environ:
    couchInfo = json.loads(os.environ['VCAP_SERVICES'])['cloudantNoSQLDB'][0]
    couchServer = couchInfo["credentials"]["url"]
    couch = couchdb.Server(couchServer)
# we are local
else:
    with open("cloudant.json") as confFile:
        couchInfo = json.load(confFile)['cloudantNoSQLDB'][0]
        couchServer = couchInfo["credentials"]["url"]
        couch = couchdb.Server(couchServer)

# access the database which was created separately
db = couch['officecam']


@app.route('/')
def index():
    # build up the result page
    page = '<title>Incidents</title>'
    page += '<h1>Security Incidents</h1>'

    # gather information from the database about the recorded incidents
    page += '<h3>Requests so far</h3>'
    # we use an already created view
    for row in db.view('incidents/incidents'):
        page += ('Time: <a href="/incident/' + str(row.key["id"]) + '">'
                 + str(row.key["ts"]) + '</a><br/>')

    # finish the page structure and return it
    return page


@app.route('/incident/<id>')
def incident(id):
    # build up the result page
    page = '<title>Incident Detail</title>'
    page += '<h1>Security Incident Details</h1>'
    doc = db.get(id)
    # gather information from the database about the incident
    # (note: "creater" matches the key stored by the RPi script)
    page += '<br/>Incident at date/time: ' + str(doc["timestamp"])
    page += '<br/>reported by "' + doc["creater"] + '" at location "' + doc["location"] + '"'
    page += '<br/>Photo taken:<br/><img src="/image/' + id + '" />'
    # finish the page structure and return it
    return page


@app.route('/image/<id>')
def image(id):
    # redirecting the request to Cloudant for now, but should be hidden in the future
    return redirect(couchServer + '/officecam/' + id + '/cam.jpg')


port = os.getenv('VCAP_APP_PORT', '5000')
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=int(port))


Overview of Security Incidents
The setup phase includes reading in access data for the Cloudant database server. Either that information is taken from a Bluemix environment variable or provided in a file "cloudant.json" (similar to what I did on the RPi). The main part of the script defines three routes, i.e., how to react to certain URL requests. The index page (index()) returns an overview of all recorded incidents, an incident detail page (incident(id)) fetches the data for a single event and embeds the stored webcam image into the generated page, and the last route (image(id)) redirects the request to Cloudant.

Looking at how the index page is generated, you will notice that a predefined Cloudant view (secondary index) named "incidents/incidents" is evaluated. It is a simple map function that emits a composite key made up of the timestamp and the document ID; Cloudant returns the view rows sorted by that key.

Incident Detail: Hadoop involved?
function(doc) {
    if (doc.type == "oc")
        emit({"ts": doc.timestamp, "id": doc._id}, 1);
}
Then I access the timestamp information and generate the list as shown in the screenshot above.
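For illustration only, the effect of that map function can be emulated in a few lines of Python. This is not how Cloudant executes views; it is just a sketch, using made-up sample documents, of what the resulting index contains:

```python
# Hypothetical sample documents, shaped like the ones the RPi uploads
docs = [
    {"_id": "a1", "type": "oc", "timestamp": "2014-07-20 09:15:00"},
    {"_id": "b2", "type": "note", "timestamp": "2014-07-20 10:00:00"},
    {"_id": "c3", "type": "oc", "timestamp": "2014-07-19 17:42:00"},
]

# Emit a composite key per "oc" document; Cloudant keeps the index
# sorted by key, so the incidents come back in timestamp order
rows = sorted(
    ({"ts": d["timestamp"], "id": d["_id"]}
     for d in docs if d["type"] == "oc"),
    key=lambda key: (key["ts"], key["id"]))
print(rows)
```

The non-incident document is filtered out, and the two remaining rows are ordered by timestamp, which is exactly the list the index page renders.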

The incident detail page has the document ID as parameter. This makes it simple to retrieve the entire document and print the details. The webcam image is embedded. So who got my chocolate? Take a look. It looks like someone who got a free copy of "Hadoop for Dummies" at the IDUG North America conference.

Maybe another incident will shed light on this mystery. Hmm, looks like someone associated with the "Freundeskreis zur Förderung des Zeppelin Museums e.V." in Friedrichshafen. I showed the pictures to my wife and she was pretty sure who took some chocolate. I should pay more attention when grabbing another piece of my chocolate and watch more closely how much I am eating/enjoying.
Zeppelin letter seen at the robbery

Have a nice weekend (and remember to sign up for a free Bluemix account)!




Catching the mean chocolate thief with Raspberry Pi, Bluemix, and Cloudant


I always try to have some chocolate in my office, as a kind of mood enhancer. But how can I be sure that nobody else is going to plunder and pilfer my hidden treasures? So it was great that last week at the Developer Week conference in Nuremberg I got my hands on a Raspberry Pi (thank you, Franzis Verlag and Christian Immler) and that I know a little about IBM Bluemix. And here is the plan: hook up my IBM-sponsored webcam to the RPi, then, triggered by a motion sensor, take a snapshot and upload the picture and metadata to a Cloudant NoSQL database. With a Bluemix-based application I could then have worldwide access to the "incident data" and catch the mean chocolate thief...

Raspberry Pi, motion sensor, and webcam
The first step is the hardware setup. Connecting the pins of the infrared motion sensor to 5V, ground, and an I/O port on the Raspberry Pi, and then the webcam to the USB port, is simple. The mini-computer already has LAN access, which is important for reaching the cloud services.

Next I logged into IBM Bluemix, the platform-as-a-service (PaaS) offering for developers, and created a Cloudant data store. This is done similarly to how I described it in my previous article on using Cloudant for some statistics for a weather webpage. The account data for the Cloudant database can be obtained in JSON format. I copied that information into a file "cloudant.json" and placed it in my project directory on the Raspberry Pi. With that, we are already at the software part of this project.

In the following, you see the Python script I used for prototyping. It performs some setup work, which includes reading in the access information for the Cloudant account. The main part is a simple loop waiting for the thief to appear, i.e., for the motion sensor to be activated:

import datetime
import time
import subprocess
import json
import couchdb
import RPi.GPIO as io

io.setmode(io.BCM)

pir_pin = 18
scriptPath = '/home/pi/projects/officeCam/takeSnap.sh'
imgFile = '/home/pi/projects/officeCam/office.jpg'

# couchDB/Cloudant-related global variables
couchInfo = ''
couchServer = ''
couch = ''

with open("cloudant.json") as confFile:
    couchInfo = json.load(confFile)['cloudantNoSQLDB'][0]
    couchServer = couchInfo["credentials"]["url"]
    couch = couchdb.Server(couchServer)

# access the database which was created separately
db = couch['officecam']

io.setup(pir_pin, io.IN)  # activate input

while True:
    if io.input(pir_pin):
        subprocess.call([scriptPath])
        # open the snapshot in binary mode for the attachment
        f = open(imgFile, 'rb')
        # basic doc structure
        doc = {"type": "oc",
               "creater": "RPi",
               "location": "office",
               "city": "Friedrichshafen"}
        doc["timestamp"] = str(datetime.datetime.utcnow())
        # and store the document
        db.save(doc)
        db.put_attachment(doc, f, filename='cam.jpg')
        f.close()

        print("Alarm processed")
    time.sleep(1)


Once some motion has been detected, the Python script invokes a shell script, printed below. Its only action is to execute the fswebcam program, which takes a snapshot with the webcam. Thereafter, back in Python, I create a JSON document, stuff the current timestamp and some other information into it, and store it in the cloud-based NoSQL database. As a last step I attach the picture to that document, so that even if the mean chocolate thief notices the trap, the image is secured in the cloud.

   
 #!/bin/sh  
 fswebcam -q -c /home/pi/projects/officeCam/fswebcam.conf  


With that I am done with the Raspberry Pi. What is left is to work on the reporting. See how it is done in Python on Bluemix and Cloudant.

Thursday, April 2, 2015

db2audit & syslog: Who Stole my Chocolate Easter Eggs?

Security Audit
Easter is coming up. A slow day due to vacation and Spring Break, time to try, in advance, some of the chocolate Easter eggs. Gosh! They are gone! Who took them? It must have been someone with insight. Let's take a look at the audit logs...

When the DB2 10.5 Cancun Release (Fixpack 4) was announced, I mentioned that db2audit records can now be transferred to syslog, and I wanted to test it. The command db2audit is used to configure parts of the DB2 audit infrastructure, to archive audit logs, and to extract information from the archived logs. The "extract" option now features a destination "syslog" (from the command syntax):

Audit Extraction

   .-file--output-file---------------------------------------------------.   
|--+---------------------------------------------------------------------+-->
   +-delasc--+---------------------------+--+-----------------+----------+   
   |         '-delimiter--load-delimiter-'  '-to--delasc-path-'          |   
   '-syslog--facility.priority--+-----------+--+-----------------------+-'   
                                '-tag--word-'  '-splitrecordsize--byte-'     

While the option "file" stores the formatted audit logs in a regular text file, choosing "delasc" splits the log data across several delimited text files, ready for postprocessing in the database. The new option "syslog" hands the audit data over to the system logger facility. Depending on which logger is used and how it is set up, this could mean storing the audit records in local message files or sending them to a central hub for analysis (e.g., by IBM Operations Analytics or Splunk).


DB2 Setup
In order to find the one trying to steal the Easter eggs, the audit system needs to be active prior to any attempt. The DB2 audit infrastructure is started with "db2audit start"; basic settings can be changed with "db2audit configure". For my tests I left everything set to failure-only logging and changed the archive path to "/tmp". Using the "describe" option, here is what the configuration looked like:


[hloeser@mymachine ~]$ db2audit describe
DB2 AUDIT SETTINGS:

Audit active: "TRUE "
Log audit events: "FAILURE"
Log checking events: "FAILURE"
Log object maintenance events: "FAILURE"
Log security maintenance events: "FAILURE"
Log system administrator events: "FAILURE"
Log validate events: "FAILURE"
Log context events: "FAILURE"
Return SQLCA on audit error: "FALSE "
Audit Data Path: ""
Audit Archive Path: "/tmp/"


It is also a good idea to use a buffer to hold audit records. The audit_buf_sz controls its size:
db2 update dbm cfg using audit_buf_sz 40

The next step in my setup was to create an audit policy in my test database:
create audit policy execfail categories execute status failure, checking status failure, context status failure error type normal

Creating a policy does not mean it is used. The AUDIT statement takes care of it:
audit sysadm,dbadm,dataaccess,user hloeser using policy execfail

Syslog Setup
The above concludes the DB2 portion of the test setup. Next is the optional step of telling the system logger where to place the received DB2 audit data. The DB2 Knowledge Center has some basic information about how to configure the system error and event log (syslog). Without any changes it is possible to dump the audit data to, e.g., "/var/log/messages". I wanted the records to go to a separate file. Because my system has rsyslog installed, I needed to edit (as root) the file "/etc/rsyslog.conf". Adding the following line causes all "user"-related records to be written to "user_messages.log" in the directory "/var/log/db2":

user.*                            /var/log/db2/user_messages.log

It is important to create that directory and file (I used "mkdir" and "touch"), then to restart the syslog facility.


DB2 Audit Logs to Syslog
Once done with the setup I connected to my test database and executed several SQL statements, including a "select * from eastereggs" (a non-existing table). Then I deemed my system ready for moving a first batch of audit records over to syslog. If a buffer for the DB2 audit data is used, it needs to be flushed:
db2audit flush

Thereafter, all the current audit logs need to be archived. This can be done for both the instance and for databases. The following archives the logs for my test database and writes the file to the configured archive path (or the default path if none is specified):
db2audit archive database hltest

After all the configuration and preparation, we are finally at the really interesting part, the new extract option. Using "syslog" as destination and the category "user" with the priority level "info", the audit logs are handed over to the system error and event logger:
db2audit extract syslog user.info from files /tmp/db2audit.*

Did the logs really make their way from DB2 over to the system infrastructure? Here is my successful test:
[hloeser@mymachine ~]$ sudo grep -i easter /var/log/db2/user_messages.log
Apr  2 13:32:10 mymachine db2audit: timestamp=2015-04-02-13.31.09.089507; category=CONTEXT; audit event=PREPARE; event correlator=40; database=HLTEST; userid=hloeser; authid=HLOESER; application id=*LOCAL.hloeser.150402095529; application name=db2bp; package schema=NULLID; package name=SQLC2K26; package section=201; text=select * from eastereggs; local transaction id=0x3266020000000000; global transaction id=0x0000000000000000000000000000000000000000; instance name=hloeser; hostname=mymachine;
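Such a record is essentially a list of semicolon-separated key=value pairs, which makes ad-hoc analysis easy. As an illustration, here is my own quick Python sketch (not an official parser) that splits a record into a dictionary; it assumes the field values themselves contain no semicolons:

```python
def parse_db2audit_record(line):
    """Split a db2audit syslog record into a dict of key=value fields.

    A rough sketch for the record layout shown above, not an official
    parser; it assumes no semicolons inside the field values.
    """
    fields = {}
    for part in line.split(';'):
        part = part.strip()
        if '=' in part:
            key, _, value = part.partition('=')
            fields[key.strip()] = value.strip()
    return fields

# shortened sample record, following the format shown above
record = ("timestamp=2015-04-02-13.31.09.089507; category=CONTEXT; "
          "audit event=PREPARE; database=HLTEST; userid=hloeser; "
          "text=select * from eastereggs")
parsed = parse_db2audit_record(record)
print(parsed["text"])
```

With the fields in a dictionary, filtering by category, user, or statement text becomes a one-liner, which is handy before the data ever reaches a central analysis hub.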

Happy Easter and hopefully some chocolate eggs are left for you!

Wednesday, March 18, 2020

My best practices for home office - Corona edition

Take some rest
If you follow my blog, you may already know that I have been working from home for more than 12 years. Except for some business travel, I tend to do the "things" considered work from a dedicated room in my house in Germany. Over the past years, I have tried to find the balance between being productive (team, customers, employer, ...), taking care of my family, socializing where possible, and staying sane and healthy. Here are some of my best practices. They may or may not work for you, but they can at least give you some inspiration.

Wednesday, January 20, 2016

The Cloud, Mood-Enhancing Substances, World Economic Forum, and More

DataWorks and Connect & Compose
Right now, the winter sky is mostly covered by low-hanging clouds, giving way only to some random rays of sun. For the past weeks I have been plagued by a cold which drew most of my energy. Now I am back, thanks to some mood-enhancing substances (a.k.a. lots of dark chocolate) and some rest. So what else, in addition to the usual World Economic Forum, is going on?

Monday, April 19, 2010

Happy eating: Separation of duties and DB2 security

One of the security enhancements delivered with DB2 9.7 is full support for separation of duties. First off, what exactly is separation of duties? In short, you split up an important task or process into smaller steps and then require several people/users to complete it. The objective is to prevent fraudulent or accidental activities and to reduce the risk and possible damage.

Combine that with auditing to keep track of who is/was doing what, encryption of data on disk plus encrypted DB2 connections. On top of that add regular roles and authorization for different tasks. Sometimes I wish I would have all that for that kitchen cabinet where my/our candy is stored.

With separation of duties my wife would be in charge of the cabinet, providing the space to store the candy (and sometimes clean up?). I would grant insert privilege to everyone, because they could then add their candy to the cabinet, and remove almost all other privileges. Secured connections would allow me to access (eat) without the kids eavesdropping on my chocolate consumption. Encrypted storage would make sure that even someone with access to the cabinet could not look inside and determine how much and what is left (my secret). With auditing I could keep track of who tried to gain access.

That would be happy eating for me. Henrik, dream on and share the candy till it becomes true...

Thursday, July 18, 2019

Rotate service credentials for Cloud Foundry apps on IBM Cloud

Keep your cloud environment secure
A popular PaaS (Platform as a Service) technology is Cloud Foundry. On IBM Cloud, it is available both as public Cloud Foundry instances in multiple regions and as Cloud Foundry Enterprise Environment (CFEE). Using Cloud Foundry for app development and production has many benefits. Personally, I like the ease of use and how services can be integrated with apps. When you search for my older Bluemix-related blog entries, you will notice that I had (and still have) fun using Cloud Foundry, including my attempts to catch a chocolate thief or testing the hybrid world early on. Similar to my recent blog on rotating credentials when working with IBM Cloud Functions, today I want to discuss how to bind services and rotate their credentials in a Cloud Foundry environment.

Thursday, January 9, 2014

!!!STOP!!! Birthday Party for 5 Years of Blogging (Your participation needed)

Five years ago, on January 9th, 2009, I started this blog. Time to look back and to celebrate. But also time to look forward. And I need your help with both. Please continue reading, 5 minutes are needed.
Photo by John Hritz, CC-BY-2.0

In late 2008 I was looking for an easy way to share tips&tricks about DB2. Over the holidays I thought about trying out "blogging" and started it in January 2009. And now I can't believe that 5 years passed already. Time to celebrate: Some extra chocolate for me today and a big THANK YOU to you for reading what I write.

As part of the celebration I am looking for some gifts, i.e., your feedback:
Please send me an email to "hloeser" at the domain "de.ibm.com" with a small note about what you like in the blog.
  • Did it help you with some specific aspects of DB2, like migration from Oracle, XML processing, taming the beast...?
  • Are you reading this blog because grammar my sometimes funny it looks?
  • Do you like the articles labeled "fun"?
  • Did you read my now "dated" articles on April Fools Days?
  • Did you try to solve all the quizzes?
  • Did you come to my blog for the series on epilepsy?
  • Did you come here by mistake after an Internet search?
  • Anything else?
And what do you want to read in the future? Again, please celebrate with me and send a quick email with some feedback. If you want to stay anonymous, please leave a comment.

Tuesday, March 2, 2010

Traveling too often? Dealing with not so easy customers? Try this...

If you are traveling too often or, of course because of the economy, have to cut down your budget for roses and chocolate, here is something to try. It doesn't say whether you can change the message using a USB plug or whether it is useful to deal with customers that need that little extra attention. Anyway, try it and let me know...

Wednesday, July 6, 2011

Daddy in the cloud vs. Daddy as a Service vs. Traditional Daddy

I am traveling right now and this morning, on one of the flights ("in the cloud") I was reading a longer article about cloud computing and its adoption in the SMB market (small and medium businesses). Then, suddenly, it dawned on me. I, as a daddy, can be compared to a business critical application.

As a "traditional daddy", I am home quite often. My wife and kids have direct access to me and can (more or less) control what's going on - all at a price. I need food (chocolate, coffee, etc.), I need some care, sometimes there are outages or my family runs into "defects" that I have. Anyway, they like this "traditional daddy" and local hosting.

As "daddy in the cloud" I am available over the Internet and over the phone. Sometimes, my kids are so busy that they don't care whether they talk with daddy in person or over the phone. "Daddy in the cloud" is also cheaper than the "traditional daddy" because IBM is paying for the food. My wife has to cook less, has to buy less. There is less usage of water at home. All is good, except "daddy in the cloud" does not have all the extras of "traditional daddy". There are also questions about who can listen in on chats (on the phone or the Internet). Sometimes there are issues with scalability (customer needs daddy, family needs daddy, but only one daddy is available). Is this still an exclusive daddy or can others gain access to it...?

What is needed is "daddy as a service" with a service level agreement (SLA). However, based on the requirements, it will be hard to meet the SLA given that I can't beam back and forth...