[Image: Add fingerprint to FIDO key]
Showing posts with label Life.
Wednesday, May 20, 2020
Use Chromium-based browsers to manage FIDO security keys
Monday, May 18, 2020
Some advanced SQL to analyze COVID-19 data
[Image: Learn to write SQL]
Tuesday, April 14, 2020
Home office and rubber duck debugging, 5 levels
[Image: Rubber duck debugging at home]
Wednesday, March 18, 2020
My best practices for home office - Corona edition
[Image: Take some rest]
Friday, February 28, 2020
Swashbooking for crowd-sourced book reviews and fun
[Image: Books for review]
Wednesday, October 2, 2019
Trip report: Sustainability management and reporting

Last Friday, I attended the annual conference of the Bodensee Innovation Cluster for digital change (i.e., change driven by digitalization). The conference had several interesting talks and included workshops. Let me give you a quick overview of the innovation cluster, then delve into the sustainability topic that was part of the conference.
Monday, February 25, 2019
Digital ethics, trusted AI and IBM
Last week I gave a talk followed by a discussion at a university. The presentation was about the current state of Artificial Intelligence (AI) and AI research topics. A good chunk of the discussion was dedicated to fairness, trust and digital ethics. In the following, I am sharing some of the related links.
IBM Research has a site dedicated to AI. On it, a section provides insight into topics around what they call Trusted AI. The main IBM site also has a portal, Trusted AI for Business, providing an introduction and overview for the non-research crowd. If you are interested and want to try out and learn about a few problems hands-on, I recommend these links (a small code sketch follows the list):
- AI Fairness 360 Open Source Toolkit: http://aif360.mybluemix.net/
- Detect the bias - a game and survey: http://biasreduction.mybluemix.net/
- Old, but still great: MIT Moral Machine: http://moralmachine.mit.edu/
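If you want a first taste of the AI Fairness 360 toolkit, here is a minimal sketch using its Python package on made-up toy data; the column names, group encoding, and numbers are all hypothetical:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# hypothetical toy data: "sex" is the protected attribute,
# "label" is the outcome (favorable = 1)
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})
ds = BinaryLabelDataset(df=df, label_names=["label"],
                        protected_attribute_names=["sex"])

metric = BinaryLabelDatasetMetric(ds,
                                  privileged_groups=[{"sex": 1}],
                                  unprivileged_groups=[{"sex": 0}])
# disparate impact: ratio of favorable outcome rates, 1.0 means parity
print("Disparate impact:", metric.disparate_impact())   # 0.33 for this toy data
```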
Finally, as a showcase of current AI capabilities, I recommend the video of IBM Project Debater and the live debate at Think 2019. There is also a short video that explains how Project Debater works.
If you have feedback, suggestions, or questions about this post, please reach out to me on Twitter (@data_henrik) or LinkedIn.
Friday, February 8, 2019
Startup lessons from a Fuckup Night
Last Wednesday, I attended the Fuckup Night Friedrichshafen Vol. II. In case you don't know: Fuckup Nights is a global movement and event series dedicated to professional failures. That is, usually founders of failed startups tell their stories. Typically, it is a mix of funny adventures into the world of business, some sad parts, and, most importantly, some lessons learned. So what were the lessons I took away? Read on...
Friday, November 16, 2018
Incorporate Git and IBM Cloud information into BASH command prompt
Saturday, October 6, 2018
Impressions from Zeppelin flight
[Image: Zeppelin flight]
Thursday, February 23, 2017
Location and Intent Matter: Data Privacy vs. US Government
[Image: Some data is locked away]
Thursday, November 24, 2016
Stuff - The Day of the BLOB and Object Storage
Regardless of whether it is turkey, cranberry sauce, stuffing, gravy, sweet potato pie, mashed potatoes or more that you eat, and independent of whether it is a new iPhone, tablet, big screen, Bluetooth soundbar, household robot or other gadget on sale, it is good to know that you can stuff almost anything into a DB2 BLOB or into the Bluemix Object Storage or Block Storage service.
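If you want to try it, here is a minimal sketch; the table and column names are made up, and BLOB(2G) declares the maximum size of two gigabytes per value:

```sql
-- room for (almost) any stuffing, up to 2 GB per value
CREATE TABLE leftovers (
  id    INTEGER NOT NULL PRIMARY KEY,
  stuff BLOB(2G)
);
```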
In that sense, "Happy Thanksgiving"! I am currently looking into the Content Delivery Network service to get my stuff faster to my folks. Talking about "stuff", enjoy this classic on "stuff" and "storage".
Wednesday, January 20, 2016
The Cloud, Mood-Enhancing Substances, World Economic Forum, and More
[Image: DataWorks and Connect & Compose]
Tuesday, October 7, 2014
Starvation: Electronic books, DRM, the local library, and database locks
Over the past few days I ran into an interesting database problem. It boils down to resource management and database locks. One of my sons is an avid reader, and thus we have an ongoing flow of hardcopy and electronic books, most of them provided by the local public library (THANK YOU!). Recently, my son used the electronic library to place a reservation on a hard-to-get ebook. Yesterday, he received the email that the book was available exclusively to him (intention lock) and had to be checked out within 48 hours (placing the exclusive lock). And so my problems began...

[Image: Trouble lending an ebook]
There is a hard limit on the maximum number of checked-out ebooks per account. All electronic books are lent for 14 days, without a way to return them earlier because of Digital Rights Management (DRM). If the account is maxed out, lending a reserved book does not work. Pure (teenage) frustration. However, there is an exclusive lock on the book copy, and nobody else can lend it either, making the book harder to get and (seemingly) even more popular. As a consequence, more reservation requests are placed, making the book even harder to lend. In database theory this is called the starvation effect or resource starvation. My advice of "read something else" is not considered a solution.
How could this software problem be solved? A change to DRM to allow earlier returns seems to be too complex. As there is also a low limit on open reservation requests per account, temporarily bumping up the number of books that can be lent per account would both solve the starvation effect and enhance usability. It would even increase throughput (average books out to readers), reduce lock waits (waiting to read a certain book), and improve customer feedback.
BTW: The locklist configuration in DB2 (similar to the number of books lent per account) is adapted automatically by the Self Tuning Memory Manager (STMM), for ease of use and happy users.
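For the curious, this is roughly how the self-tuning setup looks from the DB2 command line; a minimal sketch assuming a database named MYDB:

```sh
# let STMM adapt the lock memory automatically (assumed database name)
db2 UPDATE DB CFG FOR mydb USING LOCKLIST AUTOMATIC MAXLOCKS AUTOMATIC
# verify the setting
db2 GET DB CFG FOR mydb | grep -i locklist
```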
Friday, July 25, 2014
The Hunt for the Chocolate Thief (Part 2) - Putting IBM Bluemix, Cloudant, and a Raspberry Pi to good use
I am still on the hunt for the mean chocolate thief, kind of. In the first part I covered the side of the Raspberry Pi and uploading data to Cloudant. I showed how to set up an infrared motion sensor and a webcam with the RPi, capture a snapshot, and secure the image and related metadata in a Cloudant database on the IBM Bluemix Platform-as-a-Service (PaaS) offering. In this part I am going to create a small reporting website with Python, hosted as an IBM Bluemix service.
Similar to an earlier weather project, I use Python as the scripting language. On Bluemix, which is based on Cloud Foundry, this means "bring your own buildpack". I already described the necessary steps, i.e., how to tell Bluemix to create the runtime environment and install the needed Python libraries. So how do I access the incident data, i.e., the webcam snapshots taken by the Raspberry Pi when someone is in front of the infrared motion sensor? Let's take a look at the script:
```python
import os
from flask import Flask, redirect
import datetime
import json
import couchdb

app = Flask(__name__)

# couchDB/Cloudant-related global variables
couchInfo = ''
couchServer = ''
couch = ''

# get service information if on Bluemix
if 'VCAP_SERVICES' in os.environ:
    couchInfo = json.loads(os.environ['VCAP_SERVICES'])['cloudantNoSQLDB'][0]
    couchServer = couchInfo["credentials"]["url"]
    couch = couchdb.Server(couchServer)
# we are local
else:
    with open("cloudant.json") as confFile:
        couchInfo = json.load(confFile)['cloudantNoSQLDB'][0]
        couchServer = couchInfo["credentials"]["url"]
        couch = couchdb.Server(couchServer)

# access the database which was created separately
db = couch['officecam']

@app.route('/')
def index():
    # build up result page
    page = '<title>Incidents</title>'
    page += '<h1>Security Incidents</h1>'
    # gather information from the database about the recorded incidents
    page += '<h3>Requests so far</h3>'
    # we use an already created view
    for row in db.view('incidents/incidents'):
        page += 'Time: <a href="/incident/' + str(row.key["id"]) + '">' + str(row.key["ts"]) + '</a><br/>'
    # finish the page structure and return it
    return page

@app.route('/incident/<id>')
def incident(id):
    # build up result page
    page = '<title>Incident Detail</title>'
    page += '<h1>Security Incident Details</h1>'
    doc = db.get(id)
    # gather information from the database about the incident
    page += '<br/>Incident at date/time: ' + str(doc["timestamp"])
    # note: "creater" matches the field name written by the Raspberry Pi script
    page += '<br/>reported by "' + doc["creater"] + '" at location "' + doc["location"] + '"'
    page += '<br/>Photo taken:<br/><img src="/image/' + id + '" />'
    # finish the page structure and return it
    return page

@app.route('/image/<id>')
def image(id):
    # redirecting the request to Cloudant for now, but should be hidden in the future
    return redirect(couchServer + '/officecam/' + id + '/cam.jpg')

port = os.getenv('VCAP_APP_PORT', '5000')
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=int(port))
```
The setup phase includes reading in access data for the Cloudant database server. Either that information is taken from a Bluemix environment variable or provided in a file "cloudant.json" (similar to what I did on the RPi). The main part of the script defines three routes, i.e., how to react to certain URL requests. The index page (index()) returns an overview of all recorded incidents, an incident detail page (incident(id)) fetches the data for a single event and embeds the stored webcam image into the generated page, and the last route (image(id)) redirects the request to Cloudant.

[Image: Overview of Security Incidents]
Looking at how the index page is generated, you will notice that a predefined Cloudant view (secondary index) named "incidents/incidents" is evaluated. It is a simple map function that sorts based on the timestamp and document ID and returns just that composite key.
```javascript
function(doc) {
  if (doc.type == "oc")
    emit({"ts" : doc.timestamp, "id" : doc._id}, 1);
}
```

Then I access the timestamp information and generate the list as shown in the screenshot above.
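As a side note, the same view can also be queried directly over Cloudant's HTTP API; a sketch with a placeholder account URL and credentials:

```sh
curl "https://user:password@account.cloudant.com/officecam/_design/incidents/_view/incidents"
```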
The incident detail page has the document ID as parameter. This makes it simple to retrieve the entire document and print the details. The webcam image is embedded. So who got my chocolate? Take a look. It looks like someone who got a free copy of "Hadoop for Dummies" at the IDUG North America conference.

[Image: Incident Detail: Hadoop involved?]
Maybe another incident will shed light on this mystery. Hmm, looks like someone associated with the "Freundeskreis zur Förderung des Zeppelin Museums e.V." in Friedrichshafen. I showed the pictures to my wife, and she was pretty sure who took some chocolate. I should pay more attention when grabbing another piece of my chocolate and watch more closely how much I am eating/enjoying.
[Image: Zeppelin Brief seen at robbery]
Have a nice weekend (and remember to sign up for a free Bluemix account)!
Catching the mean chocolate thief with Raspberry Pi, Bluemix, and Cloudant
I always try to have some chocolate in my office, kind of as a mood enhancer. But how can I be sure that nobody else is going to plunder and pilfer my hidden treasures? So it was great that last week at the Developer Week conference in Nuremberg I got my hands on a Raspberry Pi (thank you, Franzis Verlag and Christian Immler) and that I know a little about IBM Bluemix. And here is the plan: hook up my IBM-sponsored webcam to the RPi and then, triggered by a motion sensor, take a snapshot and upload the picture and metadata to a Cloudant NoSQL database. With a Bluemix-based application I could then have worldwide access to the "incident data" and catch the mean chocolate thief...
[Image: Raspberry Pi, motion sensor, and webcam]
Next I logged into IBM Bluemix, the platform-as-a-service (PaaS) offering for developers, and created a Cloudant data store. This is done similar to how I described it in my previous article on using Cloudant for some statistics for a weather webpage. The account data for the Cloudant database can be obtained in JSON format. I copied that information into a file "cloudant.json" and placed it into my project directory on the Raspberry Pi. With that, we are already at the software part of this project.
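For illustration, here is the shape of such a "cloudant.json" file with placeholder credentials; it mirrors the VCAP_SERVICES structure that the script below parses:

```json
{
  "cloudantNoSQLDB": [
    {
      "credentials": {
        "url": "https://user:password@account.cloudant.com"
      }
    }
  ]
}
```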
In the following, you see the Python script I used for the prototyping. It performs some setup work, which includes reading in the access information for the Cloudant account. The main part is a simple loop waiting for the thief to appear, i.e., for the motion sensor to be activated:
```python
import datetime
import time
import subprocess
import RPi.GPIO as io
import json
import couchdb

io.setmode(io.BCM)
pir_pin = 18
scriptPath = '/home/pi/projects/officeCam/takeSnap.sh'
imgFile = '/home/pi/projects/officeCam/office.jpg'

# couchDB/Cloudant-related global variables
couchInfo = ''
couchServer = ''
couch = ''

# read the Cloudant credentials from the local configuration file
with open("cloudant.json") as confFile:
    couchInfo = json.load(confFile)['cloudantNoSQLDB'][0]
    couchServer = couchInfo["credentials"]["url"]
    couch = couchdb.Server(couchServer)

# access the database which was created separately
db = couch['officecam']

io.setup(pir_pin, io.IN)  # activate input

while True:
    if io.input(pir_pin):
        # motion detected: take a snapshot, then store document and image
        subprocess.call([scriptPath])
        f = open(imgFile, 'rb')  # binary mode for the JPEG file
        # basic doc structure
        doc = {"type": "oc",
               "creater": "RPi",
               "location": "office",
               "city": "Friedrichshafen"}
        doc["timestamp"] = str(datetime.datetime.utcnow())
        # and store the document
        db.save(doc)
        db.put_attachment(doc, f, filename='cam.jpg')
        f.close()
        print("Alarm processed")
    time.sleep(1)
```
Once some motion has been detected, the Python script invokes a shell script. It is printed below. The only action is to execute the fswebcam program, which takes a snapshot with the webcam. Thereafter, back in Python, I create a JSON document, stuff the current timestamp and some other information into it, and store it in the cloud-based NoSQL database. As a last step I attach the picture to that document, so that even if the mean chocolate thief notices the trap, the image is secured in the cloud.
```sh
#!/bin/sh
fswebcam -q -c /home/pi/projects/officeCam/fswebcam.conf
```
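For completeness, a minimal sketch of what the referenced fswebcam.conf could contain; the device, resolution, and quality settings are assumptions, not my actual configuration:

```
# camera options for fswebcam, one long option per line
device /dev/video0
resolution 640x480
jpeg 85
save /home/pi/projects/officeCam/office.jpg
```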
With that I am done with the Raspberry Pi. What is left is to work on the reporting. See how it is done in Python on Bluemix and Cloudant.
Tuesday, June 24, 2014
Why we need and have workload management
In a database system like DB2 there is also a built-in Workload Management. If you are using BLU Acceleration, it is activated by default and some rules have been defined; otherwise it is switched off. Why turn it on and use it? For the same reasons as in real life (a small SQL sketch follows the list):
- A "fair" allocation of time and resources between different work items/applications is needed ("work / life balancing"?).
- Response time for critical tasks or some type of work is important and needs to be protected against less important tasks ("your mother-in-law visits, take care of her").
- Implementation of rules to control and regulate the system behavior ("kids in bed by 8pm means time for you to watch soccer").
- Deal with rogue queries that threaten regular operations ("kids bring over the entire neighborhood").
- The system (sometimes) is overloaded and you have to set priorities ("no party this weekend").
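To make this concrete, here is a minimal, hypothetical sketch of the DB2 statements involved; the object names, application name, and time limit are assumptions, not a recommended setup:

```sql
-- route a reporting application into its own service class
CREATE SERVICE CLASS reporting_sc;

CREATE WORKLOAD reporting_wl
  APPLNAME('report.exe')
  SERVICE CLASS reporting_sc;

GRANT USAGE ON WORKLOAD reporting_wl TO PUBLIC;

-- stop rogue activities in that service class after 10 minutes
CREATE THRESHOLD reporting_time_limit
  FOR SERVICE CLASS reporting_sc ACTIVITIES
  ENFORCEMENT DATABASE
  WHEN ACTIVITYTOTALTIME > 10 MINUTES
  STOP EXECUTION;
```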
Does Workload Management help? Yes, it does. However, similar to family life, it is possible that not all planned tasks can be performed because of resource shortages. Maybe it is time for an upgrade ("hire some help, do not get more kids... :)").
I plan to discuss DB2 WLM details in future articles, workload permitting...
Labels: administration, best practices, data in action, DB2, IT, knowledge center, Life, version 10.5, workload
Monday, March 17, 2014
From Lake Constance with love: A new Goodyear Blimp
A new Zeppelin (Zeppelin NT, with NT as in "New Technology"), the next generation of Goodyear Blimps, is scheduled for its first flight today. It is the first of three. The components have been built in my current home town Friedrichshafen and shipped to Goodyear in Ohio. There, the semi-rigid airship has been assembled.
BTW: Zeppelin flights in Germany can be booked at Zeppelinflug and you can learn more about the Zeppelin history at the Zeppelin Museum in Friedrichshafen.
Saturday, February 8, 2014
Family life and DB2 BLU
Imagine that you had to search for a cooking pot within your house. Where would you start and search first? Most people would focus on the kitchen. Where would you look for some toothpaste? Most probably in the bathroom and maybe in the room where you just put the bag from your shopping trip.
Using that context information speeds up the search: you only consider some places and avoid searching the entire house. This is data skipping in normal life. DB2 with BLU Acceleration uses a synopsis table to provide the context information. By avoiding work, fewer resources are needed, less data needs to be processed, and you get the result much faster.
Now imagine that the cabinets are labeled and the kids have cleaned up their room, with clothes nicely folded and small junk sorted into plastic containers. In DB2 BLU this would be called "scan-friendly". Some people use "space bags", plastic wraps that can be vacuumed to reduce the storage size of clothes, pillows, etc. Because you can still see what is inside and handle it like everything else, it is "actionable compression", same as in DB2 BLU, which can operate on compressed data.
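Translating the analogy back to SQL, here is a minimal sketch, assuming DB2 10.5 with BLU Acceleration; the table and column names are made up:

```sql
-- column organization enables actionable compression and data skipping
CREATE TABLE sales (
  sale_date DATE,
  store_id  INTEGER,
  amount    DECIMAL(9,2)
) ORGANIZE BY COLUMN;

-- DB2 maintains the synopsis table automatically under the SYSIBM schema
SELECT tabname FROM syscat.tables
  WHERE tabschema = 'SYSIBM' AND tabname LIKE 'SYN%';
```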
Now if only I could come up with an analogy for how DB2 BLU does the dishes, something I have to do now. Household chores. Enjoy the weekend!
Thursday, January 9, 2014
!!!STOP!!! Birthday Party for 5 Years of Blogging (Your participation needed)
Five years ago, on January 9th, 2009, I started this blog. Time to look back and to celebrate. But also time to look forward. And I need your help with both. Please continue reading, 5 minutes are needed.
[Image: by John Hritz, CC-BY-2.0]
In late 2008 I was looking for an easy way to share tips&tricks about DB2. Over the holidays I thought about trying out "blogging" and started it in January 2009. And now I can't believe that 5 years passed already. Time to celebrate: Some extra chocolate for me today and a big THANK YOU to you for reading what I write.
As part of the celebration I am looking for some gifts, i.e., your feedback:
Please send me an email to "hloeser" at the domain "de.ibm.com" with a small note about what you like in the blog.
- Did it help you with some specific aspects of DB2, like migration from Oracle, XML processing, taming the beast...?
- Are you reading this blog because grammar my sometimes funny it looks?
- Do you like the articles labeled "fun"?
- Did you read my now "dated" articles on April Fools Days?
- Did you try to solve all the quizzes?
- Did you come to my blog for the series on epilepsy?
- Did you come here by mistake after an Internet search?
- Anything else?