
Tuesday, May 9, 2023

Decode JWTs in bash

Today, it is once again time for one of those "let's document it" blog posts. Recently, I looked into one of the IBM Cloud security features: trusted profiles based on compute resources. I described how to turn your container into a trusted identity. For developing code locally, I needed to copy files from the Kubernetes pods to my local machine and then decode JWT access tokens, all on the command line. Here is what I did.
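The decode step itself is compact enough to show here. As a sketch in Python (the post itself does it in bash; the token below is a made-up example), the header and payload of a JWT are just base64url-encoded JSON:

import base64
import json

def decode_jwt_part(part):
    # JWT parts are base64url-encoded without padding - re-add it before decoding
    padded = part + "=" * (-len(part) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# made-up token of the form header.payload.signature
token = "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJoZW5yaWsifQ.c2ln"
header, payload = (decode_jwt_part(p) for p in token.split(".")[:2])
print(header, payload)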

Tuesday, October 25, 2022

Things I learned at IDUG on JSON Web Token support in Db2

JSON Web Token support in Db2
Right now, I am at the IDUG 2022 EMEA Db2 Tech Conference in Edinburgh, Scotland. It's great to have an in-person event again, to network and to discuss. This year, there are many sessions covering Db2 security, and they are well-attended. Database security seems to be (more) popular now. In the past, I have blogged about JSON Web Tokens (JWT) and Db2. They allow a Single Sign-On (SSO) integration with Db2.

Tuesday, September 6, 2022

New IBM Cloud security features you should know

Custom role for operating Code Engine
Cybersecurity is always in a state of change. There are new security features and new attack methods - or was it the other way around? Over the past months, several new security features were added to IBM Cloud. In this blog post, I want to quickly describe them and point you to the IBM Cloud blog posts where I discuss them in greater detail.

Monday, March 1, 2021

JWT token authentication in Db2 runtimes like Python or Node.js

Python script connecting to Db2 with JWT
Some weeks ago I discussed how to configure JWT-based token authentication in Db2. I set up Db2 to accept JWS identity tokens and then connected to my test database using the command line. But how do you connect from a programming language like Python or Node.js? Here is what I needed.
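To sketch the idea (a minimal example, assuming the token-related connection keywords from the Db2 documentation; database, host, and token are placeholders): instead of a userid and password, the JWT goes into the connection string:

import ibm_db

jwt_token = "eyJ..."  # placeholder - a valid JWT issued for the Db2 user
conn_str = (
    "DATABASE=testdb;HOSTNAME=localhost;PORT=50000;PROTOCOL=TCPIP;"
    "AUTHENTICATION=TOKEN;ACCESSTOKEN=" + jwt_token + ";ACCESSTOKENTYPE=JWT;"
)
# user and password stay empty - the token carries the identity
conn = ibm_db.connect(conn_str, "", "")
stmt = ibm_db.exec_immediate(conn, "VALUES CURRENT USER")
print(ibm_db.fetch_tuple(stmt))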

Tuesday, January 12, 2021

Db2 Security: Configure JSON Web Token (JWT) authentication

Db2 login using a JWT access token
Since V11.5.4, Db2 can consume JWTs (JSON Web Tokens). The business case is to support SSO (single sign-on): you can configure Db2 to accept access tokens issued by external Identity Providers (IdPs), including IBM solutions or open source projects like Gluu. V11.5.5 brought some JWT improvements, e.g., support for multiple labels. On a recent Friday, I set up my Db2 Docker container to accept JWTs - not those issued by an external identity service, but my own.
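Minting such a token takes only a few lines. Here is a sketch using the pyjwt package (the key file, claims, and the "mylabel" kid are made-up examples; the kid has to match a label configured in Db2):

import datetime
import jwt  # pip install pyjwt[crypto]

# made-up RSA key pair; Db2 would be configured to trust the public part
with open("private.pem", "rb") as keyfile:
    private_key = keyfile.read()

claims = {
    "iss": "my-own-idp",  # issuer Db2 is set up to accept
    "sub": "henrik",      # the database user
    "exp": datetime.datetime.utcnow() + datetime.timedelta(hours=1),
}
token = jwt.encode(claims, private_key, algorithm="RS256", headers={"kid": "mylabel"})
print(token)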

Tuesday, April 7, 2020

Encode credentials and upload to Key Protect

Protect credentials in Key Protect
If you follow my blog or have used IBM Cloud, then you know that Key Protect (and Hyper Protect Crypto Services) is the solution for managing encryption keys on IBM Cloud. Did you know that you can import your own keys? And did you know that a "key" can be (almost) anything? Today, I am going to show you how to manage a set of JSON-based credentials with Key Protect.
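The underlying trick: Key Protect accepts any valid base64 string as the material of an imported (standard) key. A sketch of the idea (region, instance ID, and IAM token are placeholders; the request shape follows the Key Protect REST API docs):

import base64
import json
import requests

# the credentials to protect - any JSON works
credentials = {"username": "henrik", "apikey": "not-a-real-key"}
key_material = base64.b64encode(json.dumps(credentials).encode()).decode()

response = requests.post(
    "https://us-south.kms.cloud.ibm.com/api/v2/keys",
    headers={
        "Authorization": "Bearer <IAM access token>",
        "Bluemix-Instance": "<Key Protect instance ID>",
        "Content-Type": "application/vnd.ibm.kms.key+json",
    },
    json={
        "metadata": {"collectionType": "application/vnd.ibm.kms.key+json",
                     "collectionTotal": 1},
        "resources": [{
            "type": "application/vnd.ibm.kms.key+json",
            "name": "my-credentials",
            "extractable": True,  # standard key, so the material can be retrieved again
            "payload": key_material,
        }],
    },
)
print(response.status_code)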

Thursday, November 30, 2017

IBM Cloud: Some fun with Python and Cloud Foundry Logging


IBM Cloud: Turn Log Data into Donut
Last month, after receiving user questions, I blogged about how to decipher Cloud Foundry log entries. Today, I want to point you to a small Cloud Foundry Python app I wrote. It helps to better understand Python and Cloud Foundry logging. You can also use it to test the IBM Cloud Log Analysis service, which provides an easy-to-use interface to logs generated by applications running in the IBM Cloud. In the premium plans, external log events can also be fed into the service for consolidated storage and analysis.

As usual, the code for my app is available on GitHub: https://github.com/data-henrik/application-log-analysis/. Once deployed to IBM Cloud, the app can be used to send messages at a chosen log level back to the server. The server-side log level, i.e., the threshold for processed log messages, can also be set. The app produces diagnostic output on "stdout" and "stderr". The two are treated differently by Cloud Foundry. Here is a screenshot of the logging app:
Test app for Cloud Foundry logging
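To illustrate the stdout/stderr split described above, here is a rough sketch (my own minimal code, not the app itself): informational messages go to stdout, warnings and errors to stderr, and the logger level acts as the adjustable threshold:

import logging
import sys

logger = logging.getLogger("log-demo")
logger.setLevel(logging.DEBUG)  # the adjustable server-side threshold

# Cloud Foundry treats the two streams differently, so use two handlers
to_stdout = logging.StreamHandler(sys.stdout)
to_stdout.addFilter(lambda record: record.levelno < logging.WARNING)
to_stderr = logging.StreamHandler(sys.stderr)
to_stderr.setLevel(logging.WARNING)
logger.addHandler(to_stdout)
logger.addHandler(to_stderr)

logger.info("this lands on stdout")
logger.error("this lands on stderr")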
The produced log entries can also be used to try out the IBM Cloud Log Analysis service. Diagnostic logs are automatically forwarded to the Log Search of that service. The messages are fed into Elasticsearch and can be analyzed using Kibana. I wrote some search queries (one shown below) and then built visualizations, like the "Donut" shown above, based on those queries. I will write more about that in a future blog post.
Search Query for Elasticsearch / IBM Cloud Log Analysis

An official tutorial using that app and Log Analysis is available in the IBM Cloud docs.

If you have feedback, suggestions, or questions about this post, please reach out to me on Twitter (@data_henrik) or LinkedIn.

Monday, November 17, 2014

A quick look at dashDB and a happy SQuirreL

dashDB slogan on its website
This morning I took some time to take a look at dashDB, a new IBM DWaaS (Data Warehouse as a Service) offering. When you go to the dashDB website, you are offered two choices: Use the dashDB service available on IBM Bluemix or use a Cloudant account to add a warehouse to your JSON database. Let me give you a brief overview of what you can do with dashDB and how I connected a local (open source) SQuirreL SQL client to my new dashDB database.


Cloudant Warehousing (dashDB)
dashDB is a cloud-based analytics database ("analytics in a dash") with roots in Netezza and DB2 with BLU Acceleration. Data is stored in table (rows and columns) format. It is ready to connect to all kinds of analytic tools, local or cloud-based, and is already set up for geo-spatial data analysis (instructions on how to use the ESRI ArcGIS Desktop are provided). The best part is that your regular SQL database/analytic tools continue to work - see below for details.

dashDB: schema discovery
I started my journey by logging into my existing Cloudant account. There, on the dashboard menu, is a new item "Warehousing". When clicking the "New Warehouse" button, you can select the Cloudant databases that you want to import into the warehouse. Because multiple databases can be associated with a Cloudant account or a Bluemix Cloudant service, this step lets you pick the data of choice. After the source data is chosen, the dashDB database is created, and so-called schema discovery turns the JSON documents into rows of tables. Thereafter, the data is ready to have analytics applied. That is the time to launch the dashDB control center, another so-called "dashboard".

The welcome screen shows some of the analytic options, e.g., the database is ready to be used with either Cognos, SPSS, InfoSphere DataStage, R scripts, or all of them and more:
Analytics for dashDB: Cognos, SPSS, DataStage, R
SQuirreL SQL client - dashDB connected
Because I had already tested and blogged about a predecessor of dashDB some time ago (see here: how to set it up and how to use R), I was more interested in trying out a JDBC-based client with my new cloud-based data warehouse. Included as part of the dashboard are several sections that help you with the application setup. So it was easy for me to obtain the JDBC URL and configure it, along with the listed userid/password, in my local SQuirreL SQL client (it will work in IBM Data Studio and the Optim tool, too). As you can see from the screenshot, the database connection from my laptop to the cloud-based dashDB succeeded. Ready for some SQL.
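For reference, the JDBC URL follows the usual Db2 driver pattern; a sketch with a placeholder host (BLUDB is the dashDB database name):

jdbc:db2://<host>:50000/BLUDB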

My lessons learned from testing database queries on the converted data (JSON to relational) will be part of another blog entry. Stay tuned...

Monday, October 13, 2014

Node-RED: Simple "phoney" JSON entries in Cloudant

Node-RED Starter on IBM Bluemix
Want to automatically store data about who called you on which phone number? Last Friday, I tried exactly that on IBM Bluemix and was amazed how simple it is - no real programming involved. All I needed was the Node-RED starter boilerplate (icon on the right) on Bluemix and a new API from my mobile service provider.

The Node-RED boilerplate automatically creates a Node.js runtime environment on Bluemix and installs the Node-RED tool into it. In addition, a Cloudant JSON database is created. Once everything was deployed, I opened the Node-RED tool in a Web browser. It offers a basic set of input and output methods, processing nodes, and the ability to connect them in a flow graph. One of the input nodes is a listener for http requests, which helps to react to Web service requests. I placed such an http input node on the work sheet and labeled it "phone" (see screenshot).
Node-RED tool on Bluemix

How did I obtain information about callers and the numbers they called? Well, I tapped my friends at a security agency. I used a new free service offering from one of my phone service providers. It is called "sipgate.io" and allows configuring a URL/Web address that is accessed whenever someone calls one of the account's phone numbers (you could have multiple phone numbers). An http POST request is sent to the configured URL, and the caller's phone number and the called number are included as payload. In my Node-RED application, the "phone" node would answer this request.

What I needed now was the data processing flow of the Web services request. On Friday I already tweeted the entire flow:



On the left, we see the "phone" node as http input. Connected to it is the "ok" node, which sends an http response back, telling the phone company's Web service that we received the information. The other connected node is a "json" processor which translates the payload (who called which number) into a meaningful JSON object. That object is then moved on to the "calls" node, a Cloudant output node. All we needed was to select the Cloudant service on Bluemix and to configure the database name.
Cloudant Output Node, Node-RED on Bluemix
The magic itself was pressing the "Deploy" button on top of my Node-RED worksheet. It created the Node.js code for my Bluemix app. I was a little bit nervous about testing the app because I didn't code anything, just clicked. For my test, I took the phone in my home office and dialed my mobile phone number. The result? A new, nice and shiny, "phoney" JSON document in Cloudant:
"Phoney" record in Cloudant
Almost too easy. Great stuff, but unfortunately no code to share... :) (You can find the flow on GitHub)

Friday, July 25, 2014

The Hunt for the Chocolate Thief (Part 2) - Putting IBM Bluemix, Cloudant, and a Raspberry Pi to good use

I am still on the hunt for the mean chocolate thief, kind of. In the first part, I covered the side of the Raspberry Pi and uploading data to Cloudant. I showed how to set up an infrared motion sensor and a webcam with the RPi, capture a snapshot, and secure the image and related metadata in a Cloudant database on the IBM Bluemix Platform-as-a-Service (PaaS) offering. In this part, I am going to create a small reporting website with Python, hosted on IBM Bluemix.

Similar to an earlier weather project, I use Python as the scripting language. On Bluemix, which is based on Cloud Foundry, this means "bring your own buildpack". I already described the necessary steps, which tell Bluemix how to create the runtime environment and install the needed Python libraries. So how do I access the incident data, i.e., the webcam snapshots taken by the Raspberry Pi when someone is in front of the infrared motion sensor? Let's take a look at the script:

import os
from flask import Flask, redirect
import json
import couchdb

app = Flask(__name__)

# couchDB/Cloudant-related global variables
couchInfo = ''
couchServer = ''
couch = ''

# get service information if on Bluemix
if 'VCAP_SERVICES' in os.environ:
    couchInfo = json.loads(os.environ['VCAP_SERVICES'])['cloudantNoSQLDB'][0]
    couchServer = couchInfo["credentials"]["url"]
    couch = couchdb.Server(couchServer)
# we are local
else:
    with open("cloudant.json") as confFile:
        couchInfo = json.load(confFile)['cloudantNoSQLDB'][0]
        couchServer = couchInfo["credentials"]["url"]
        couch = couchdb.Server(couchServer)

# access the database which was created separately
db = couch['officecam']

@app.route('/')
def index():
    # build up result page
    page = '<title>Incidents</title>'
    page += '<h1>Security Incidents</h1>'

    # gather information from the database about the recorded incidents
    page += '<h3>Incidents so far</h3>'
    # we use an already created view
    for row in db.view('incidents/incidents'):
        page += 'Time: <a href="/incident/' + str(row.key["id"]) + '">' + str(row.key["ts"]) + '</a><br/>'

    # finish the page structure and return it
    return page

@app.route('/incident/<id>')
def incident(id):
    # build up result page
    page = '<title>Incident Detail</title>'
    page += '<h1>Security Incident Details</h1>'
    doc = db.get(id)
    # gather information from the database about the incident
    page += '<br/>Incident at date/time:' + str(doc["timestamp"])
    # "creater" matches the field name stored by the RPi script
    page += '<br/>reported by "' + doc["creater"] + '" at location "' + doc["location"] + '"'
    page += '<br/>Photo taken:<br/><img src="/image/' + id + '" />'
    # finish the page structure and return it
    return page

@app.route('/image/<id>')
def image(id):
    # redirecting the request to Cloudant for now, but should be hidden in the future
    return redirect(couchServer + '/officecam/' + id + '/cam.jpg')

port = os.getenv('VCAP_APP_PORT', '5000')
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=int(port))


Overview of Security Incidents
The setup phase includes reading in access data for the Cloudant database server. Either that information is taken from a Bluemix environment variable or provided in a file "cloudant.json" (similar to what I did on the RPi). The main part of the script defines three routes, i.e., how to react to certain URL requests. The index page (index()) returns an overview of all recorded incidents, an incident detail page (incident(id)) fetches the data for a single event and embeds the stored webcam image into the generated page, and the last route (image(id)) redirects the request to Cloudant.

Looking at how the index page is generated, you will notice that a predefined Cloudant view (secondary index) named "incidents/incidents" is evaluated. It is a simple map function that emits a composite key made up of the timestamp and document ID, which also sorts the results by time.

Incident Detail: Hadoop involved?
function(doc) {
  if (doc.type == "oc")
    emit({"ts": doc.timestamp, "id": doc._id}, 1);
}
Then I access the timestamp information and generate the list as shown in the screenshot above.

The incident detail page has the document ID as parameter. This makes it simple to retrieve the entire document and print the details. The webcam image is embedded. So who got my chocolate? Take a look. It looks like someone who got a free copy of "Hadoop for Dummies" at the IDUG North America conference.

Maybe another incident will shed light on this mystery. Hmm, looks like someone associated with the "Freundeskreis zur Förderung des Zeppelin Museums e.V." in Friedrichshafen. I showed the pictures to my wife, and she was pretty sure who took some chocolate. I should pay more attention when grabbing another piece of my chocolate - and watch more closely how much I am eating/enjoying.
Zeppelin letter seen at the robbery

Have a nice weekend (and remember to sign up for a free Bluemix account)!




Wednesday, July 2, 2014

Nice Cloud, no rain: Using Cloudant/couchDB with Python on Bluemix

My last two blog entries were about getting started with Python on IBM Bluemix and how to use a custom domain with my Bluemix weather application. Today I am going to show how I added Cloudant and couchDB to my application, both locally and on Bluemix.

Storing the weather data locally doesn't make sense because I can query much more historical data on OpenWeatherMap. So I am going to use a database to log which city the data was requested for, and when. That information, in aggregated form, could then be reported as a fun fact to each user of the app. I chose Cloudant because it is simple to use, adequate for the intended purpose, has free usage plans on Bluemix, and I can use and test it locally as couchDB.

Add Cloudant as new service
The code itself is relatively simple, and I added comments (the full listing is shown at the end of the article). The interesting parts are how to add a Cloudant service to my application on Bluemix, how to bind it to the application, and the preparation work for the database itself. So let's take a look at those steps.

Cloudant is offered as one of several services in the "Data Management" category on Bluemix. While on the Dashboard, you simply click on the "Add a service" button as shown on the right. Navigate to the Data Management section and choose Cloudant.

It will bring up a screen showing information about the service itself and its usage terms and, on the right side, a dialog "Add Service" for adding the service to your account. Here you can already bind the new database service to your application by selecting an existing application from a dropdown list. I did that and gave my new Cloudant service the name "cloudantWeather" as shown:
Bind Cloudant to your application
Once the service is added, you can bring up the Cloudant administration interface. I have used Cloudant and couchDB before, so that isn't anything new. To avoid dealing with creating a database as part of the actual program, I decided to create a "weather" database through the administration interface for both the hosted Cloudant and my local couchDB servers. An interesting but not too tricky part is how to access both servers depending on where the application is running. Information with the username, password, server address, and other details is provided in an environment variable VCAP_SERVICES when run on Bluemix. Thus, in the program I am testing for the presence of that variable and then either retrieve the server information from it or access my local couchDB:

# get service information if on Bluemix
if 'VCAP_SERVICES' in os.environ:
    couchInfo = json.loads(os.environ['VCAP_SERVICES'])['cloudantNoSQLDB'][0]
    couchServer = couchInfo["credentials"]["url"]
    couch = couchdb.Server(couchServer)
# we are local
else:
    couchServer = "http://127.0.0.1:5984"
    couch = couchdb.Server(couchServer)


Storing new documents is simple and is shown in the full code listing. For the queries, I am using the MapReduce feature of couchDB. In a "map" function, I return the city name (and just the integer value 1); in the reduce function, I am aggregating (summing up) the values by city. Both functions could be defined in the Python script and then passed into Cloudant as part of the query, or predefined for more performance. I chose the latter. So I created a so-called "secondary index" in my Cloudant database; it is called a "view" in my couchDB. They are stored as part of a "design document" (shown in Cloudant):

Secondary index / permanent view
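As a sketch of what is behind that screenshot (using the field names from the full listing below), the view could also be predefined from Python with the couchdb package's design-document helper:

from couchdb.design import ViewDefinition

# map emits each logged city with the value 1; the built-in _sum reduce adds them up
city_count = ViewDefinition(
    "weatherQueries", "cityCount",
    map_fun='function(doc) { if (doc.type == "city") { emit(doc.city, 1); } }',
    reduce_fun="_sum",
)
city_count.sync(db)  # db is the database handle from the listing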

With that, I finish my Python application, add some calls to the couchDB Python API (which I needed to add to the file "requirements.txt" as a dependency), and test it locally. The final step is to deploy the application to Bluemix using the Cloud Foundry tool "cf push". Done, seems to work:
Bluemix weather app with Cloudant stats




Last but not least, here is the code I used for my little app:


import os
from flask import Flask, redirect
import urllib
import datetime
import json
import couchdb

BASE_URL = "http://api.openweathermap.org/data/2.5/weather?q="
BASE_URL_fc = "http://api.openweathermap.org/data/2.5/forecast/daily?cnt=1&q="
app = Flask(__name__)

# couchDB/Cloudant-related global variables
couchInfo = ''
couchServer = ''
couch = ''

# get service information if on Bluemix
if 'VCAP_SERVICES' in os.environ:
    couchInfo = json.loads(os.environ['VCAP_SERVICES'])['cloudantNoSQLDB'][0]
    couchServer = couchInfo["credentials"]["url"]
    couch = couchdb.Server(couchServer)
# we are local
else:
    couchServer = "http://127.0.0.1:5984"
    couch = couchdb.Server(couchServer)

# access the database which was created separately
db = couch['weather']

@app.route('/')
def index():
    return redirect('/weather/Friedrichshafen')

@app.route('/weather/<city>')
def weather(city):
    # log city into couchDB/Cloudant
    # basic doc structure
    doc = {"type": "city",
           "c_by": "bm",
           }
    # we store the city and the current timestamp
    doc["city"] = city
    doc["timestamp"] = str(datetime.datetime.utcnow())
    # and store the document
    db.save(doc)

    # Time to grab the weather data and to create the resulting Web page
    # build URIs and query current weather data and forecast
    # JSON data needs to be converted
    url = "%s/%s" % (BASE_URL, city)
    wdata = json.load(urllib.urlopen(url))
    url_fc = "%s/%s" % (BASE_URL_fc, city)
    wdata_fc = json.load(urllib.urlopen(url_fc))

    # build up result page
    page = '<title>current weather for ' + wdata["name"] + '</title>'
    page += '<h1>Current weather for ' + wdata["name"] + ' (' + wdata["sys"]["country"] + ')</h1>'
    page += '<br/>Min Temp. ' + str(wdata["main"]["temp_min"] - 273.15)
    page += '<br/>Max Temp. ' + str(wdata["main"]["temp_max"] - 273.15)
    page += '<br/>Current Temp. ' + str(wdata["main"]["temp"] - 273.15) + '<br/>'
    page += '<br/>Weather: ' + wdata["weather"][0]["description"] + '<br/>'
    page += '<br/><br/>'
    page += '<h2>Forecast</h2>'
    page += 'Temperatures'
    page += '<br/>Min: ' + str(wdata_fc["list"][0]["temp"]["min"] - 273.15)
    page += '<br/>Max: ' + str(wdata_fc["list"][0]["temp"]["max"] - 273.15)
    page += '<br/>Morning: ' + str(wdata_fc["list"][0]["temp"]["morn"] - 273.15)
    page += '<br/>Evening: ' + str(wdata_fc["list"][0]["temp"]["eve"] - 273.15)
    page += '<br/><br/>Weather: ' + wdata_fc["list"][0]["weather"][0]["description"]
    page += '<br/><br/>'

    # Gather information from database about which city was requested how many times
    page += '<h3>Requests so far</h3>'
    # We use an already created view
    for row in db.view('weatherQueries/cityCount', group=True):
        page += row.key + ': ' + str(row.value) + '<br/>'

    # finish the page structure and return it
    page += '<br/><br/>Data by <a href="http://openweathermap.org/">OpenWeatherMap</a>'
    return page

port = os.getenv('VCAP_APP_PORT', '5000')
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=int(port))


Friday, March 28, 2014

XML or JSON: When to choose what for your database

There is a new article on IBM developerWorks titled "XML or JSON: Guidelines for what to choose for DB2 for z/OS". It was written by two very experienced technologists with a DB2 background. Though the comparison and the many code examples for how to use either XML or JSON are for DB2 for z/OS, most of it also applies to DB2 LUW. So I recommend reading the article regardless of which "camp" you are in.

At the end of the XML vs. JSON comparison is a resource section with a list of good papers and documentation. In my blog's page on DB2 & pureXML Resources you will also find additional material on that topic.

With that to read, enjoy the weekend!

Thursday, June 27, 2013

(Updated) NoSQL and JSON with DB2

You may have heard that DB2 10.5, which recently became available, includes a technology preview "DB2 NoSQL JSON". JSON is the JavaScript Object Notation. The important stuff for JSON in DB2 can be found in the sqllib/json directory. Once I added the "lib" directory to the Java CLASSPATH, I could invoke a new command line processor "db2nosql" (found in "bin"):

hloeser@rotach:~/sqllib/json/bin$ sh db2nosql.sh
JSON Command Shell Setup and Launcher.
This batch script assumes your JRE is 1.5 and higher. 1.6 will mask your password.
Type db2nosql.sh -help to see options
Enter DB:jsontest


IBM DB2 NoSQL API 1.1.0.0 build 1.0.169
Licensed Materials - Property of IBM
(c) Copyright IBM Corp. 2013 All Rights Reserved.

Debug mode is off.
nosql>
nosql>Type your JSON query and end it with ;
nosql>Type help() or help for usage information
nosql>
nosql>Setup Tables and Functions seems to have not been created or have been created incorrectly. Please type enable(true) and enter to setup them up. You must have the correct admin privileges.
If you do not, type enable(false) and enter to see the SQL that will be used.


After entering the database name to use, I got connected. Next was to enable the database for JSON processing, as suggested above.

nosql>enable(true)
Executing SQL...
Database Artifacts created successfully


If you use "enable(false)", the tool will just print the DDL to execute manually. This includes the creation of a system table SYSTOOLS.SYSJSON_INDEX and several functions. Next, I set a JSON namespace. By default, the user schema would be taken as the namespace. Because I plan to look into storage details later on, I chose "TEST" (which will use the SQL schema name "TEST" under the covers).

 nosql>use test
Switched to schema TEST



Then I was good to go and inserted two documents. Note that it is not necessary to create any table, collection, index, or anything else beforehand. Finding one of the stored documents via a predicate also worked.

nosql>db.obj.insert({name: "Henrik", country: "DE"})
OK
nosql>db.obj.insert({name: "Jim", country: "US"})
OK
nosql>db.obj.find({country: "US"})
nosql>Row 1:
nosql> {
nosql> "_id":{"$oid":"51c9acf301c690e828779af2"},
nosql> "name":"Jim",
nosql> "country":"US"
nosql> }
nosql>1 row returned in 134 milliseconds.


Some more commands are described in a recent developerWorks article on JSON in DB2. I also plan to write more about my adventures here in this blog.

Added 27.06.2013:
A couple more developerWorks articles are now available. Finally, I have a video introducing the technology.