[Image: ER diagram for cloud security data]
Showing posts with label github. Show all posts
Friday, March 24, 2023
Analyze your IBM Cloud access management setup
Wednesday, March 16, 2022
From Bluemix to IBM Cloud, from Cloud Foundry to Code Engine
"Bring Your Own Community" |
Thursday, March 11, 2021
Cloud tutorial on serverless web app and eventing
A follow-up from my last post on Python decorators:
[Image: Solution architecture]
Today, that same solution scenario and app are still available, but they are served by IBM Cloud Code Engine. Code Engine is a fully managed, serverless platform that runs your containerized workloads, including web apps, microservices, event-driven functions and batch jobs. The slightly renamed tutorial, "Serverless web app and eventing for data retrieval and analytics", demonstrates how the existing app can be containerized, then both served as a web app and used to process the daily data collection event.
Friday, March 5, 2021
Pseudo-decorators for my Python Flask app
[Image: Secured Python Flask app on Code Engine]
Tuesday, January 12, 2021
Db2 Security: Configure JSON Web Token (JWT) authentication
[Image: Db2 login utilizing a JWT]
Monday, December 28, 2020
OBS on Linux: Green screen and virtual camera for video conferencing
[Image: OBS Studio: My monkey enjoys the beach]
Monday, March 2, 2020
Extend IBM Cloud Security Advisor with your own security metrics
[Image: Custom findings in Security Advisor]
Friday, November 16, 2018
Incorporate Git and IBM Cloud information into BASH command prompt
Monday, October 22, 2018
Automated reports with IBM Cloud Functions, Db2 and Slack
[Image: GitHub Traffic Analytics]
Wednesday, July 18, 2018
Now on GitHub: Understand and build chatbots the easy way
Recently, I posted about a then upcoming Meetup and my talk about chatbots. Here is a quick follow-up. To compile material for that presentation and some other upcoming talks, I created a GitHub repository "chatbot-talk2018". It has lots of links to get started and to deepen your understanding of chatbot technology. Moreover, it contains a presentation in Markdown for GitPitch for you to use and extend. And finally, I wrote this brief introduction to some chatbot terms and concepts:
- Intents are what the user aims for, the desired action or result of the interaction. An intent can be to retrieve a weather report.
- Entities are (real or virtual) subjects or objects. For the example of the weather report, entities can be the city or country, e.g., Friedrichshafen in Germany, or date and time information such as "today afternoon".
- A dialog, dialog flow or dialog tree is used to structure the interaction. Typically, an interaction lasts longer than the user providing input and the chatbot returning a single answer. A dialog can be highly complex, with several levels, subbranches, (directed) links between dialog nodes and more. For a weather chatbot, a dialog could be constructed that, after a greeting, asks the user for the location and time of the weather report, then asks whether additional information, such as an outlook for the next few days, is needed.
- Slots are supported by several chatbot systems. They specify the data items that need to be collected in order to produce the result of an intent. To return a weather report, e.g., at least the location and maybe the date or time are needed.
- Context is state information that is carried from step to step for a specific user interaction. The context typically stores the information that is already gathered as input (see "slot"), result-related data or metadata, or general chat information, e.g., the user name.
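The terms above can be sketched in a few lines of Python. Everything here (the function, the slot table, the replies) is hypothetical and not tied to Watson Assistant or any other chatbot framework; it only illustrates how intent, entities, slots, dialog and context fit together for the weather example:

```python
# Hypothetical sketch of the chatbot terms for the weather example;
# names and structure are made up, not from any specific framework.

# Slots: the data items required to fulfill the "get_weather" intent.
REQUIRED_SLOTS = {"get_weather": ["location", "time"]}

def handle_turn(intent, entities, context):
    """One dialog turn: fill slots from recognized entities, then either
    prompt for the next missing slot or produce the final answer."""
    slots = context.setdefault("slots", {})  # context persists across turns
    slots.update(entities)

    missing = [s for s in REQUIRED_SLOTS[intent] if s not in slots]
    if missing:
        # Dialog: ask the user for the next missing slot.
        return "Please tell me the " + missing[0] + "."
    return "Weather for " + slots["location"] + ", " + slots["time"] + "."

context = {}
reply1 = handle_turn("get_weather", {"location": "Friedrichshafen"}, context)
reply2 = handle_turn("get_weather", {"time": "today afternoon"}, context)
```

After the first turn only the location slot is filled, so the sketch asks for the time; once both slots are present, the second turn returns the answer.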
Monday, July 16, 2018
Extended: Manage and interact with Watson Assistant from the command line
Remember my blog posts about how to manage Watson Assistant from the command line and how to test context for a conversation? Well, that tool did not work well for the server actions which I used in this tutorial on building a database-driven Slackbot. The good news is that I found time to extend my command line Watson Conversation Tool to support credentials for IBM Cloud Functions.
With the recent update to the tool there are two new features:
- Use the option "-outputonly" with the "-dialog" option to only print the output text, not the entire JSON response object. I introduced it to be able to demo dialog flows from the command line. Not everybody needs all the metadata for every dialog turn. Here is how it looks like when in action:
- In order to test dialog server actions, I need to provide the credentials for IBM Cloud Functions (ICF) in a private context variable. I recently blogged about how to enable the Watson botkit middleware for those server actions. For my tool, just provide the ICF key token as part of the configuration file. A sample is part of the GitHub repository.
[Image: Chatbot dialog on the command line]
Tuesday, June 26, 2018
Enable Botkit Middleware for Watson Assistant for serverless actions
[Image: Slack chatbot with Watson Assistant]
Tuesday, April 24, 2018
Automated, regular database jobs with IBM Cloud Functions (and Db2)
[Image: IBM Cloud Functions and Db2]
Monday, April 23, 2018
Use Db2 and IBM Cloud to analyze GitHub traffic data (tutorial)
[Image: Architecture: GitHub Traffic Analytics]
Friday, June 2, 2017
EgoBot: Fun with a Slightly Mutating ChatBot
[Image: Fun with the Bluemix EgoBot]
The EgoBot is at an early stage right now. It supports queries about some of its metadata and adding new intents. And it has both an English and a German version (does language change its character...?). You can see a sample session below.
[Image: Chatting with the Bluemix EgoBot]
Tuesday, March 28, 2017
Chatbots: Manage Your Watson Conversations from the Command Line or App
[Image: Manage Watson Conversation Workspaces]
Monday, February 20, 2017
Write Your Own CLI Plugins for Bluemix Cloud Foundry
[Image: README for my Plugin]
Wednesday, January 18, 2017
Context Path Routing of Apps and Services in Bluemix
[Image: Context Paths for Bluemix Apps]
Cloud Foundry introduced Context Path Routing last year. Until then, each app (or service) had to be served from its own hostname. Now apps can share a host, with each app served from a specific path on that host. Here are two examples:
- When building a larger website, there could be several so-called microsites embedded. With Context Path Routing it is possible to serve, e.g., example.com from one web app and example.com/user-management or example.com/news from other apps. All these apps could be written in different programming languages such as Node.js, Python, Java and others.
- For a more complex microservice-based app, following the principles of the Twelve-Factor App, there could be several (backing) services involved. The app and each service would require its own hostname. With Context Path Routing, the app could use app.mybluemix.net and the services could be served from app.mybluemix.net/service1, app.mybluemix.net/service2, etc.
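The dispatch idea behind the two examples can be illustrated with a small Python sketch; the routing table and app names here are hypothetical, and a real Cloud Foundry router is of course more involved:

```python
# Hypothetical sketch of context path dispatch on a shared host:
# the longest matching path prefix decides which app serves the request.
ROUTES = {
    "/user-management": "user-app",
    "/news": "news-app",
}

def dispatch(path):
    """Return the app serving the longest matching context path,
    falling back to the main web app for everything else."""
    for prefix in sorted(ROUTES, key=len, reverse=True):
        if path.startswith(prefix):
            return ROUTES[prefix]
    return "main-app"
```

So a request for /news/2017 would be served by the news app, while any path without a registered prefix falls through to the main site.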
Labels:
administration,
bluemix,
cloud,
cloudfoundry,
github,
ibmcloud,
IT,
virtualization
Friday, March 11, 2016
Coincidence? CeBIT visitors and weather featuring Jupyter Notebooks, Spark and dashDB
[Image: Jupyter Notebook via Bluemix]
(Note that I am in a hurry and don't have time for detailed steps today, but I am sharing the sources and will add steps later on.)
The screenshot on the right is the result of what I am going to produce today. The source file for the notebook, the exported HTML file, input data, etc. can be found in this GitHub repository. If you came here for DB2 or dashDB, you might wonder what Jupyter Notebooks are. Notebooks are interactive web pages with sections ("cells") that contain text or code. The text can be in different input formats, including Markdown. The code cells support various programming languages, can be edited inline and are executed on demand. Basically, a notebook is an interactive, on-demand business/database report. And as you can see in the screenshot, the code can produce graphs.
The IBM Analytics for Apache Spark service on Bluemix provides those analytic notebooks, and it is the service I provisioned for my tests. Once you launch the service, you can start off with sample notebooks or create them from scratch. I started with samples to get up to speed and then composed my own (see my notebook source on GitHub). It has several cells written in Python to set up a connection to dashDB/DB2, execute queries, fetch data and process that data within the notebook. The data is used to plot a couple of graphs.
For my example I am using dashDB (a DB2-based service) that I provisioned on Bluemix as the data store. I used the LOAD wizard to create and fill one table holding historic CeBIT dates and visitor counts, and another table with historic weather data for Hanover, Germany (obtained from Deutscher Wetterdienst). Within the notebook, those tables are queried and the data is fetched into so-called data frames. The data frames are used to transform and shape the data as needed and serve as the source for the generated graphs. Within the notebook it is possible to combine data frames, execute queries on them and more - something I didn't do today.
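The kind of query behind the notebook, joining visitor counts with the weather for the same year, can be sketched as follows. This uses Python's built-in SQLite as a stand-in for dashDB, and the table names, columns and numbers are all made up for illustration, not the real CeBIT or Wetterdienst data:

```python
import sqlite3

# Stand-in for the two dashDB tables; schema and values are hypothetical.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE cebit (year INTEGER, visitors INTEGER)")
cur.execute("CREATE TABLE weather (year INTEGER, avg_temp REAL, rain_mm REAL)")
cur.executemany("INSERT INTO cebit VALUES (?, ?)",
                [(2014, 210000), (2015, 221000), (2016, 200000)])
cur.executemany("INSERT INTO weather VALUES (?, ?, ?)",
                [(2014, 9.5, 12.0), (2015, 7.8, 25.0), (2016, 11.2, 2.0)])

# Fetch visitor counts next to the weather for the same year, similar to
# how the notebook combines the two result sets before plotting.
rows = cur.execute("""
    SELECT c.year, c.visitors, w.avg_temp, w.rain_mm
    FROM cebit c JOIN weather w ON c.year = w.year
    ORDER BY c.year
""").fetchall()
```

In the actual notebook, the result of such a query lands in a data frame, which is then reshaped and plotted.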
To get to my dashDB-based graphs in a Jupyter Notebook on IBM Analytics for Apache Spark, I needed to work around some issues I ran into, including data type casts, naming of result columns, labeling of graphs, sourcing columns as input for a graph and more. For time reasons, I refer you to the comments in the source code of my notebook.
After all that introduction, here is the resulting graph. It shows that during a sunny and warm week with close to no rain there were fewer CeBIT attendees. A little rain, some sun and average temperatures yielded a high visitor count. So could it be that the weather-to-attendee relationship is bogus for computer fairs and may only hold for museums? Anyway, it was fun learning Jupyter Notebooks on Bluemix. Now I need to plot my weekend plans...
[Image: Historic CeBIT Weather and Attendance]
Monday, August 18, 2014
Accessing DB2 from node.js on IBM Bluemix and locally
Some days ago I started experimenting with node.js. Other than JSON and some click functions on webpages I don't have much experience with JavaScript. The reason for this "adventure" is that node.js is offered as one of several programming languages on IBM Bluemix, IBM's platform-as-a-service (PaaS). I wanted to find out how complex or easy it would be to bring both node.js and DB2 (IBM's relational and in-memory database system) together.
When I start with some new language I typically produce errors. Thus I wanted to avoid pushing my app to Bluemix all the time and instead wanted to test it locally first. Hence I downloaded and installed a local node.js environment, including the node.js package manager "npm". npm allows you to install additional modules/code libraries. They are placed into the directory "node_modules". Within the program (or script), the modules are included and referenced via the "require" statement:
var express = require('express');
The above binds the "ExpressJS Web Application Framework for node". That framework is part of the node.js starter application on IBM Bluemix. With that basic application, which is offered for download, you can easily test whether the local installation works OK:
node app.js
The command which I executed in a regular shell launches the node.js runtime with the sample application. Based on the configuration it provides a small web application available on my laptop on port 3000. Accessing "http://127.0.0.1:3000" in my web browser shows the demo page. All ok.
To combine node.js and DB2 I require the DB2 database driver:
var ibmdb = require('ibm_db');
Just running the application again would return an error because the module has not been installed. Hence my next step in the command shell is:
npm install ibm_db
This invokes the node package manager and instructs it to download and install the IBM database client driver and the related node.js API. After a minute it returned an error because it couldn't find the file "sqlcli1.h". This is an indicator that my local DB2 was missing the application development environment. Running "db2setup" again (as root), selecting "work with existing" and marking the application development package for installation solved the issue. After db2setup finished, I ran "npm install ibm_db" again and it was able to download, build and install that module.
To test my small app both locally and on Bluemix, I needed to obtain user and DB2 instance information for either the local environment or the Bluemix SQLDB service (DB2). This is done with the following code snippet (not that beautiful, as I just started...):
// get DB2 SQLDB service information
function findKey(obj, lookup) {
  for (var i in obj) {
    if (typeof(obj[i]) === "object") {
      if (i.toUpperCase().indexOf(lookup) > -1) {
        // Found the key
        return i;
      }
      // keep searching in nested objects
      var found = findKey(obj[i], lookup);
      if (found !== -1) {
        return found;
      }
    }
  }
  return -1;
}

var env = null;
var key = -1;
var db2creds = null;
if (process.env.VCAP_SERVICES) {
  // running on Bluemix: service credentials are in VCAP_SERVICES
  env = JSON.parse(process.env.VCAP_SERVICES);
  key = findKey(env, 'SQLDB');
}
if (!env) {
  // running locally: read the credentials from a file
  console.log("We are local");
  var file = __dirname + '/db2cred.json';
  try {
    db2creds = require(file);
  } catch (err) {
    console.log("Could not read " + file);
  }
  console.log(db2creds);
} else {
  db2creds = env[key][0].credentials;
}

var connString = "DRIVER={DB2};DATABASE=" + db2creds.db + ";UID=" +
  db2creds.username + ";PWD=" + db2creds.password + ";HOSTNAME=" +
  db2creds.hostname + ";port=" + db2creds.port;

app.get('/db2', routes.db2test(ibmdb, connString));
In the code, I first search for the object with the Bluemix environment information. If it is not found, the code assumes it is a local invocation. In that case the DB2 access information is loaded from the file "db2cred.json". It is a file I created in the application directory with content like this:
[Image: Logo for my DB2 node.js app]
{
"hostname": "127.0.0.1",
"host": "127.0.0.1",
"port": 50000,
"username": "hloeser",
"password": "mytopsecretpassword",
"db": "CLOUDDB"
}
The code uses the information about the hostname, port, and the user/password combination to create a connection string. That information together with the IBM Database Driver interface can be passed to a request handler in the node.js/Express runtime infrastructure (the "app.get()" call).
My small test application runs successfully both on my laptop and on IBM Bluemix. I plan to write more about it over the next days and to upload the code to my GitHub account. Bluemix-related posts can be accessed via this link.
Update: The follow-up article has been published here, showing geo IP lookup and logging into DB2.