
Wednesday, May 20, 2020

Use Chromium-based browsers to manage FIDO security keys

Add fingerprint to FIDO key using the browser
Recently, I made a discovery that simplified how I manage my FIDO security keys. Instead of using a vendor tool to set a PIN or add fingerprints, I have switched to using a Chrome-/Chromium-based browser for the administration. This works well on my Linux box. In this blog post, I am going to detail some of the available features.

Monday, May 18, 2020

Some advanced SQL to analyze COVID-19 data

Learn to write SQL
All of us are impacted by COVID-19. Looking at daily case numbers, the basic reproduction number, or mortality rates has become routine. But what is behind those numbers? How are they computed, and how does SQL help? In this post, I want to discuss how some SQL analytics clauses help to dig through the data. When I teach database systems, I always try to show how the material applies to life. Here, SQL may not directly help to cure the disease, but it is essential to compute and understand the statistics.
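
To give a flavor of the analytics clauses meant here, consider a sketch (not taken from the post; table and column names are made up): a table covid_cases(report_date, total_cases) holds cumulative counts per day, the LAG window function derives daily new cases, and a framed AVG computes a 7-day moving average:

 -- hypothetical table holding cumulative case counts, one row per day
 WITH daily AS (
   SELECT report_date,
          total_cases - LAG(total_cases, 1, 0) OVER (ORDER BY report_date) AS new_cases
   FROM covid_cases
 )
 SELECT report_date,
        new_cases,
        AVG(CAST(new_cases AS DECIMAL(9,2))) OVER (ORDER BY report_date
            ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) AS avg_new_7days
 FROM daily
 ORDER BY report_date;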

Tuesday, April 14, 2020

Home office and rubber duck debugging, 5 levels

Rubber duck debugging at home
Recently, I shared with you my best practices for working from home. Today, I want to add an angle specific to technical jobs, especially for developers. When you work in co-located teams, you benefit from direct conversations and exchange. In (software) development and technical writing, it is important to rethink ongoing processes or ideas, to reassess a situation, and to explain what you do. Rubber duck debugging is one such method - debugging code by explaining it to a rubber duck.

Wednesday, March 18, 2020

My best practices for home office - Corona edition

Take some rest
If you follow my blog, you may already know that I have been working from home for more than 12 years. Except for some business travel, I tend to do the "things" considered work from a dedicated room in my house in Germany. Over the past years, I have tried to find the balance between being productive (team, customers, employer, ...), taking care of my family, socializing where possible, and staying sane and healthy. Here are some of my best practices. They may or may not work for you, but at least they can give you some inspiration.

Friday, February 28, 2020

Swashbooking for crowd-sourced book reviews and fun

Books for review
Usually, I don't go to book clubs or write book reviews. But yesterday evening was different with my first swashbooking session (German: Buchstrudeln). It is fast-paced book skimming and crowd-sourced book reviewing combined. And a lot of fun. So what is it, and what did we actually do? Read on...

Wednesday, October 2, 2019

Trip report: Sustainability management and reporting


UN Sustainable Development Goals

Last Friday, I attended the annual conference of the Bodensee Innovation Cluster for digital change (changes due to digitalization). The conference had several interesting talks and included workshops. Let me give you a quick overview of the innovation cluster, then delve into the sustainability topic which was part of the conference.

Monday, February 25, 2019

Digital ethics, trusted AI and IBM

Last week I gave a talk followed by a discussion at a university. The presentation was about the current state of Artificial Intelligence (AI) and AI research topics. A good chunk of the discussion was dedicated to fairness, trust and digital ethics. In the following, I am sharing some of the related links.

IBM Research has a site dedicated to AI. On it, a section provides insight into topics on what they call Trusted AI. The main IBM site also has a portal, Trusted AI for Business, providing an introduction and overview for the non-research crowd. If you are interested and want to try out and learn about a few problems hands-on, I recommend these links:

IBM experts are part of many public panels, workgroups, and commissions. In Germany, there is the Enquete-Kommission "Künstliche Intelligenz - Gesellschaftliche Verantwortung und wirtschaftliche, soziale und ökologische Potenziale" (Artificial Intelligence - Societal Responsibility and Economic, Social, and Ecological Potential). On the European level, there is the EU High-Level Expert Group on Artificial Intelligence.

Finally, as a showcase of current AI capabilities, I recommend this video of IBM Project Debater and the live debate at Think 2019. A short video explains how Project Debater works:


If you have feedback, suggestions, or questions about this post, please reach out to me on Twitter (@data_henrik) or LinkedIn.

Friday, February 8, 2019

Startup lessons from a Fuckup Night

Last Wednesday, I attended the Fuckup Night Friedrichshafen Vol. II. If you don't know it, Fuckup Nights is a global movement and event series dedicated to professional failures. That is, usually founders of failed startups tell their stories. Typically, it is a mix of funny adventures into the world of business, some sad parts, and most importantly some lessons learned. So what were the lessons I took away? Read on...

Friday, November 16, 2018

Incorporate Git and IBM Cloud information into BASH command prompt

Many of you know that I love to use the command line. Because my day-to-day work includes interfacing with IBM Cloud and GitHub, I have changed my BASH configuration to include related information in the command prompt. Here is what I did and how it looks.

Saturday, October 6, 2018

Impressions from Zeppelin flight

Zeppelin flight
Recently, I had the opportunity to fly on a Zeppelin NT, the kind of Zeppelin I had blogged about before. The 6+ hour flight was a once-in-a-lifetime opportunity because it could not be booked. It took us from Bonn Hangelar (EDKB) to Friedrichshafen (EDNY). Our journey started with a detour over Cologne, then followed the Rhine up to Karlsruhe, took a turn toward Stuttgart, and from there headed south to Lake Constance (see the rough route we took on the right).

Thursday, February 23, 2017

Location and Intent Matter: Data Privacy vs. US Government

Some data is locked away from the US authorities
Earlier this month and last summer, two interesting cases related to data privacy were decided. Both concern US search warrants for email data stored outside the United States of America. In July 2016, the United States Court of Appeals for the Second Circuit ruled that Microsoft does not need to hand over email data stored in Ireland. This February, the United States District Court for the Eastern District of Pennsylvania decided that Google must produce emails that were stored outside the USA. The latter case is not yet final because Google plans to appeal the ruling. Independent of that, what is the take-away from these rulings? Let's take a look.

Thursday, November 24, 2016

Stuff - The Day of the BLOB and Object Storage

Regardless of whether it is turkey, cranberry sauce, stuffing, gravy, sweet potato pie, mashed potatoes or more that you eat, and independent of whether it is a new iPhone, tablet, big screen, Bluetooth soundbar, household robot or other gadget on sale, it is good to know that you can stuff almost anything into a DB2 BLOB or into the Bluemix Object Storage or Block Storage service.

In that sense "Happy Thanksgiving"! I am currently looking into the Content Delivery Network service to get my stuff faster to my folks. Talking about "stuff", enjoy this classic on "stuff" and "storage":


Wednesday, January 20, 2016

The Cloud, Mood-Enhancing Substances, World Economic Forum, and More

DataWorks and Connect & Compose
Right now, the winter sky is mostly covered by low-hanging clouds, giving way only to some random rays of sun. The past weeks I have been plagued by a cold which drained most of my energy. Now I am back, thanks to some mood-enhancing substances (a.k.a. lots of dark chocolate) and some rest. So what else, in addition to the usual World Economic Forum, is going on?

Tuesday, October 7, 2014

Starvation: Electronic books, DRM, the local library, and database locks

Over the past days I ran into an interesting database problem. It boils down to resource management and database locks. One of my sons is an avid reader, and thus we have an ongoing flow of hardcopy and electronic books, most of them provided by the local public library (THANK YOU!). Recently, my son used the electronic library to place a reservation on a hard-to-get ebook. Yesterday, he received the email that the book was available exclusively to him (an intention lock) and had to be checked out within 48 hours (placing the exclusive lock). And so my problems began...
Trouble lending an ebook

There is a hard limit on the maximum number of checked-out ebooks per account. All electronic books are lent for 14 days without a way to return them earlier because of Digital Rights Management (DRM). If the account is maxed out, lending a reserved book does not work. Pure (teenage) frustration. However, there is an exclusive lock on the book copy and nobody else can lend it either, making the book harder to get and (seemingly) even more popular. As a consequence, more reservation requests are placed, making the book even harder to lend. In database theory this is called starvation or resource starvation. My advice of "read something else" is not considered a solution.

How could this software problem be solved? A change to DRM to allow earlier returns seems to be too complex. As there is also a low limit on open reservation requests per account, temporarily bumping up the number of books that can be lent per account would both solve the starvation effect and enhance usability. It would even increase throughput (average books out to readers), reduce lock waits (trying to read a certain book), and improve customer feedback.

BTW: The locklist configuration in DB2 (similar to the number of books lent per account) is adapted automatically by the Self-Tuning Memory Manager (STMM), for ease of use and great user/customer feedback.
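
If you want to try the database side of the analogy, here is a minimal sketch (assuming a database named MYDB and that self-tuning memory is enabled): setting the lock-related parameters to AUTOMATIC hands them over to STMM.

 db2 update db cfg for mydb using LOCKLIST AUTOMATIC MAXLOCKS AUTOMATIC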

Friday, July 25, 2014

The Hunt for the Chocolate Thief (Part 2) - Putting IBM Bluemix, Cloudant, and a Raspberry Pi to good use

I am still on the hunt for the mean chocolate thief, kind of. In the first part I covered the side of the Raspberry Pi and uploading data to Cloudant. I showed how to set up an infrared motion sensor and a webcam with the RPi, capture a snapshot, and secure the image and related metadata in a Cloudant database on the IBM Bluemix Platform-as-a-Service (PaaS) offering. In this part I am going to create a small reporting website with Python, hosted on IBM Bluemix.

Similar to an earlier weather project, I use Python as the scripting language. On Bluemix, which is based on Cloud Foundry, this means "bring your own buildpack". I already described the necessary steps, i.e., how to tell Bluemix to create the runtime environment and install the needed Python libraries. So how do I access the incident data, i.e., the webcam snapshots taken by the Raspberry Pi when someone is in front of the infrared motion sensor? Let's take a look at the script:

 import os
 import json
 import couchdb
 from flask import Flask, redirect

 app = Flask(__name__)

 # couchDB/Cloudant-related global variables
 couchInfo = ''
 couchServer = ''
 couch = ''

 # get the service information if running on Bluemix
 if 'VCAP_SERVICES' in os.environ:
     couchInfo = json.loads(os.environ['VCAP_SERVICES'])['cloudantNoSQLDB'][0]
     couchServer = couchInfo["credentials"]["url"]
     couch = couchdb.Server(couchServer)
 # we are local
 else:
     with open("cloudant.json") as confFile:
         couchInfo = json.load(confFile)['cloudantNoSQLDB'][0]
         couchServer = couchInfo["credentials"]["url"]
         couch = couchdb.Server(couchServer)

 # access the database which was created separately
 db = couch['officecam']

 @app.route('/')
 def index():
     # build up the result page
     page = '<title>Incidents</title>'
     page += '<h1>Security Incidents</h1>'

     # gather information from the database about the recorded incidents
     page += '<h3>Requests so far</h3>'
     # we use an already created view
     for row in db.view('incidents/incidents'):
         page += 'Time: <a href="/incident/' + str(row.key["id"]) + '">' + str(row.key["ts"]) + '</a><br/>'

     # finish the page structure and return it
     return page

 @app.route('/incident/<id>')
 def incident(id):
     # build up the result page
     page = '<title>Incident Detail</title>'
     page += '<h1>Security Incident Details</h1>'
     doc = db.get(id)
     # gather information from the database about the incident
     page += '<br/>Incident at date/time: ' + str(doc["timestamp"])
     # note: "creater" matches the field name stored by the RPi script
     page += '<br/>reported by "' + doc["creater"] + '" at location "' + doc["location"] + '"'
     page += '<br/>Photo taken:<br/><img src="/image/' + id + '" />'
     # finish the page structure and return it
     return page

 @app.route('/image/<id>')
 def image(id):
     # redirect the request to Cloudant for now; this should be hidden in the future
     return redirect(couchServer + '/officecam/' + id + '/cam.jpg')

 port = os.getenv('VCAP_APP_PORT', '5000')
 if __name__ == "__main__":
     app.run(host='0.0.0.0', port=int(port))


Overview of Security Incidents
The setup phase includes reading in access data for the Cloudant database server. Either that information is taken from a Bluemix environment variable or provided in a file "cloudant.json" (similar to what I did on the RPi). The main part of the script defines three routes, i.e., how to react to certain URL requests. The index page (index()) returns an overview of all recorded incidents, an incident detail page (incident(id)) fetches the data for a single event and embeds the stored webcam image into the generated page, and the last route (image(id)) redirects the request to Cloudant.

Looking at how the index page is generated, you will notice that a predefined Cloudant view (secondary index) named "incidents/incidents" is evaluated. It is a simple map function that emits a composite key made up of the timestamp and the document ID; the view rows are sorted by that key.

Incident Detail: Hadoop involved?
function(doc) {
  if (doc.type == "oc")
    emit({"ts" : doc.timestamp, "id" : doc._id}, 1);
}
Then I access the timestamp information and generate the list as shown in the screenshot above.

The incident detail page takes the document ID as a parameter. This makes it simple to retrieve the entire document and print the details. The webcam image is embedded. So who got my chocolate? Take a look. It looks like someone who got a free copy of "Hadoop for Dummies" at the IDUG North America conference.

Maybe another incident will shed light on this mystery. Hmm, it looks like someone associated with the "Freundeskreis zur Förderung des Zeppelin Museums e.V." in Friedrichshafen. I showed the pictures to my wife and she was pretty sure who took some chocolate. I should pay more attention when grabbing another piece of my chocolate and watch more closely how much I am eating/enjoying.
Zeppelin Brief seen at robbery

Have a nice weekend (and remember to sign up for a free Bluemix account)!




Catching the mean chocolate thief with Raspberry Pi, Bluemix, and Cloudant


I always try to have some chocolate in my office, kind of as a mood enhancer. But how can I be sure that nobody else is going to plunder and pilfer my hidden treasures? So it was great that last week at the Developer Week conference in Nuremberg I got my hands on a Raspberry Pi (thank you, Franzis Verlag and Christian Immler) and that I know a little about IBM Bluemix. And here is the plan: hook up my IBM-sponsored webcam to the RPi and then, triggered by a motion sensor, take a snapshot and upload the picture and metadata to a Cloudant NoSQL database. With a Bluemix-based application I could then have worldwide access to the "incident data" and catch the mean chocolate thief...

Raspberry Pi, motion sensor, and webcam
The first step is the hardware setup. Connecting the pins of the infrared motion sensor to 5V, ground, and an IO port on the Raspberry Pi, and then the webcam to the USB port, is simple. The mini-computer already has LAN access, which is important to reach the cloud services.

Next I logged into IBM Bluemix, the Platform-as-a-Service (PaaS) offering for developers, and created a Cloudant data store. This is done similarly to how I described it in my previous article on using Cloudant for some statistics for a weather webpage. The account data for the Cloudant database can be obtained in JSON format. I copied that information into a file "cloudant.json" and placed it in my project directory on the Raspberry Pi. With that, we are already at the software part of this project.
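
In case you are wondering what that file contains: trimmed down to the fields my scripts actually read, the structure looks like this (the real file has more fields; the values here are placeholders):

 {
   "cloudantNoSQLDB": [
     {
       "credentials": {
         "url": "https://user:password@account.cloudant.com"
       }
     }
   ]
 }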

In the following, you see the Python script I used for the prototyping. It performs some setup work, which includes reading in the access information for the Cloudant account. The main part is a simple loop waiting for the thief to appear, i.e., for the motion sensor to be activated:

 import datetime
 import time
 import subprocess
 import json
 import couchdb
 import RPi.GPIO as io

 io.setmode(io.BCM)

 pir_pin = 18
 scriptPath = '/home/pi/projects/officeCam/takeSnap.sh'
 imgFile = '/home/pi/projects/officeCam/office.jpg'

 # couchDB/Cloudant-related global variables
 couchInfo = ''
 couchServer = ''
 couch = ''

 # read the Cloudant account data that was copied from Bluemix
 with open("cloudant.json") as confFile:
     couchInfo = json.load(confFile)['cloudantNoSQLDB'][0]
     couchServer = couchInfo["credentials"]["url"]
     couch = couchdb.Server(couchServer)

 # access the database which was created separately
 db = couch['officecam']

 io.setup(pir_pin, io.IN)  # activate input

 while True:
     if io.input(pir_pin):
         # take a snapshot with the webcam
         subprocess.call([scriptPath])
         # open the image in binary mode so it can be attached unmodified
         f = open(imgFile, 'rb')
         # basic doc structure
         doc = {"type": "oc",
                "creater": "RPi",
                "location": "office",
                "city": "Friedrichshafen"}
         doc["timestamp"] = str(datetime.datetime.utcnow())
         # store the document, then attach the picture to it
         db.save(doc)
         db.put_attachment(doc, f, filename='cam.jpg')
         f.close()

         print("Alarm processed")
     time.sleep(1)
   


Once some motion has been detected, the Python script invokes a shell script, which is printed below. Its only action is to execute the fswebcam program, which takes a snapshot with the webcam. Thereafter, back in Python, I create a JSON document, stuff the current timestamp and some other information into it, and store it in the cloud-based NoSQL database. As the last step I attach the picture to that document, so that even if the mean chocolate thief notices the trap, the image is secured in the cloud.

   
 #!/bin/sh  
 fswebcam -q -c /home/pi/projects/officeCam/fswebcam.conf  


With that I am done with the Raspberry Pi. What is left is to work on the reporting. See how it is done in Python on Bluemix and Cloudant.

Tuesday, June 24, 2014

Why we need and have workload management

While you are working in your office, a rare visitor from another location stops by. Time for a break to catch up on the latest gossip, but not for too long. On the way back to your office, your boss asks you to call someone from the client team to clarify some technical issues, and you have to squeeze it in between two important customer calls. And you just received a text message that your wife cannot pick up the kids, so you need to leave on time this afternoon to do it. Workload Management (WLM) in real life. Everybody seems to be doing WLM, some better, some not so well. And there are many unwritten rules.

A database system like DB2 also has built-in workload management. If you are using BLU Acceleration, it is activated by default and some rules are predefined; otherwise it is switched off. Why turn it on and use it? For the same reasons as in real life:
  • A "fair" allocation of time and resources between different work items/applications is needed ("work / life balancing"?).
  • Response time for critical tasks or some type of work is important and needs to be protected against less important tasks ("your mother-in-law visits, take care of her").
  • Implementation of rules to control and regulate the system behavior ("kids in bed by 8pm means time for you to watch soccer").
  • Deal with rogue queries that threaten regular operations ("kids bring over the entire neighborhood").
  • The system (sometimes) is overloaded and you have to set priorities ("no party this weekend").
All this can be done with the DB2 Workload Manager. It allows you to identify different types of activities (work), manage them based on rules that govern available resources and set controls, and monitor the system behavior. The database workload manager can be integrated with the operating system (OS) workload manager on AIX and Linux. This is especially useful when more than a single database is active and resources need to be controlled on a higher level ("sync your family calendar with the grandparents").
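
To make this less abstract, here is a hedged sketch in DB2 DDL (the names and the application are invented for illustration): a service class for reporting work, a workload that maps an application to it, and a threshold that deals with rogue queries.

 -- a service class ("family member") with its own share of resources
 CREATE SERVICE CLASS reporting_work;

 -- map all connections of a (hypothetical) application to that class
 CREATE WORKLOAD reports_wl APPLNAME('reportapp')
   SERVICE CLASS reporting_work;

 -- stop rogue queries that run longer than 10 minutes
 CREATE THRESHOLD stop_rogue_queries
   FOR SERVICE CLASS reporting_work ACTIVITIES ENFORCEMENT DATABASE
   WHEN ACTIVITYTOTALTIME > 10 MINUTES STOP EXECUTION;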

Does Workload Management help? Yes, it does. However, similar to family life it is possible that because of resource shortage not all planned tasks can be performed. Maybe time for an upgrade ("hire some help, do not get more kids... :)").

I plan to discuss DB2 WLM details in future articles, workload permitting...

Monday, March 17, 2014

From Lake Constance with love: A new Goodyear Blimp


A new Zeppelin (Zeppelin NT, with NT as in "New Technology"), the next generation of Goodyear Blimps, is scheduled for its first flight today. It is the first of three. The components have been built in my current home town Friedrichshafen and shipped to Goodyear in Ohio. There, the semi-rigid airship has been assembled.

BTW: Zeppelin flights in Germany can be booked at Zeppelinflug and you can learn more about the Zeppelin history at the Zeppelin Museum in Friedrichshafen.



Saturday, February 8, 2014

Family life and DB2 BLU

Imagine that you had to search for a cooking pot within your house. Where would you start and search first? Most people would focus on the kitchen. Where would you look for some toothpaste? Most probably in the bathroom and maybe in the room where you just put the bag from your shopping trip.

Using the context information speeds up the search: you consider only some places and avoid searching the entire house. This is data skipping in normal life. DB2 with BLU Acceleration uses a synopsis table to provide the context information. By avoiding work, fewer resources are needed, less data needs to be processed, and you get the result much faster.
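
If you are curious, you can spot the synopsis tables in the catalog; DB2 creates them automatically for column-organized tables (the SYN... naming is internal and may differ between versions):

 SELECT tabschema, tabname
 FROM syscat.tables
 WHERE tabschema = 'SYSIBM'
   AND tabname LIKE 'SYN%';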

Now imagine that the cabinets are labeled and the kids have cleaned up their room, with clothes nicely folded and small junk sorted into plastic containers. In DB2 BLU this would be called "scan-friendly". Some people use "space bags", plastic wraps that can be vacuumed to reduce the storage size of clothes, pillows, etc. Because you can still see what is inside and handle it like everything else, it is "actionable compression" - the same as in DB2 BLU, which can operate on compressed data.

Now if only I could create an analogy for how DB2 BLU does the dishes - something I have to do now. Household chores. Enjoy the weekend!


Thursday, January 9, 2014

!!!STOP!!! Birthday Party for 5 Years of Blogging (Your participation needed)

Five years ago, on January 9th 2009, I started this blog. Time to look back and to celebrate. But also time to look forward. And I need your help with both. Please continue reading, 5 minutes are needed.
Image by John Hritz, CC-BY-2.0

In late 2008 I was looking for an easy way to share tips & tricks about DB2. Over the holidays I thought about trying out "blogging" and started in January 2009. And now I can't believe that 5 years have passed already. Time to celebrate: some extra chocolate for me today and a big THANK YOU to you for reading what I write.

As part of the celebration I am looking for some gifts, i.e. your feedback:
Please send me an email to "hloeser" at the domain "de.ibm.com" with a small note about what you like in the blog.
  • Did it help you with some specific aspects of DB2, like migration from Oracle, XML processing, taming the beast...?
  • Are you reading this blog because grammar my sometimes funny it looks?
  • Do you like the articles labeled "fun"?
  • Did you read my now "dated" articles on April Fools Days?
  • Did you try to solve all the quizzes?
  • Did you come to my blog for the series on epilepsy?
  • Did you come here by mistake after an Internet search?
  • Anything else?
And what do you want to read in the future? Again, please celebrate with me and send a quick email with some feedback. If you want to stay anonymous, please leave a comment.