Wednesday, October 13, 2021
IBM Cloud resource reclamations: Some more details and my best practices
Wednesday, September 22, 2021
To restore or to delete: Working with IBM Cloud resource reclamation
In this blog post, I am going to give a brief overview of resource reclamation in IBM Cloud and share some tips and tricks from my experience.
Wednesday, July 28, 2021
Password expiration and vacation planning
Thursday, April 2, 2015
db2audit & syslog: Who Stole my Chocolate Easter Eggs?
When the DB2 10.5 Cancun Release (Fixpack 4) was announced, I mentioned that db2audit records can now be transferred to syslog and that I wanted to test it. The command db2audit is used to configure parts of the DB2 audit infrastructure, to archive audit logs, and to extract information from the archived logs. The "extract" option now features a destination "syslog" (from the command syntax):
Audit Extraction:
  file output-file                                          (default)
  delasc [ delimiter load-delimiter ] [ to delasc-path ]
  syslog facility.priority [ tag word ] [ splitrecordsize byte ]
While the option "file" would store the formatted audit logs in a regular text file, choosing "delasc" would split the log data across several delimited text files, ready for postprocessing in the database. The new option "syslog" can be used to hand over the audit data to the system logger facility. Depending on which logger is used and how it is set up, this could mean storing the audit records in local message files or sending them to a central hub for analysis (e.g., by IBM Operations Analytics or Splunk).
DB2 Setup
To find the one trying to steal the Easter eggs, the audit system needs to be active prior to any attempt. The DB2 audit infrastructure is started with "db2audit start"; basic settings can be changed with "db2audit configure". For my tests I left everything set to failure-only logging and changed the archive path to "/tmp". Using the "describe" option, this is how the configuration looked:
[hloeser@mymachine ~]$ db2audit describe
DB2 AUDIT SETTINGS:
Audit active: "TRUE "
Log audit events: "FAILURE"
Log checking events: "FAILURE"
Log object maintenance events: "FAILURE"
Log security maintenance events: "FAILURE"
Log system administrator events: "FAILURE"
Log validate events: "FAILURE"
Log context events: "FAILURE"
Return SQLCA on audit error: "FALSE "
Audit Data Path: ""
Audit Archive Path: "/tmp/"
It is also a good idea to use a buffer to hold audit records. The audit_buf_sz controls its size:
db2 update dbm cfg using audit_buf_sz 40
The next step in my setup was to create an audit policy in my test database:
create audit policy execfail categories execute status failure,checking status failure, context status failure error type normal
Creating a policy does not mean it is used. The AUDIT statement takes care of it:
audit sysadm,dbadm,dataaccess,user hloeser using policy execfail
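To double-check what is in place, the catalog can be queried. This is only a sketch based on my recollection of the SYSCAT.AUDITPOLICIES and SYSCAT.AUDITUSE catalog views; verify the column names against your DB2 version:

```sql
-- List the audit policies defined in the database
SELECT auditpolicyname FROM syscat.auditpolicies;

-- Show which objects (users, authorities, ...) have a policy associated
SELECT auditpolicyname, objecttype, objectname FROM syscat.audituse;
```

The second query should now show the execfail policy associated with the authorities and the user from the AUDIT statement above.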
Syslog Setup
The above concludes the DB2 portion of the test setup. Next is the optional step of telling the system logger where to place the received DB2 audit data. The DB2 Knowledge Center has some basic information about how to configure the system error and event log (syslog). Without any changes it is possible to dump the audit data to, e.g., "/var/log/messages". I wanted the records to go to a separate file. Because my system has rsyslog installed, I needed to edit (as root) the file "/etc/rsyslog.conf". Adding the following line causes all "user"-related records to be written to "user_messages.log" in the directory "/var/log/db2":
user.* /var/log/db2/user_messages.log
It is important to create that directory and file (I used "mkdir" and "touch"), then to restart the syslog facility.
DB2 Audit Logs to Syslog
Once done with the setup I connected to my test database and executed several SQL statements, including a "select * from eastereggs" (a non-existing table). Then I deemed my system ready for moving a first batch of audit records over to syslog. If a buffer for the DB2 audit data is used, it needs to be flushed:
db2audit flush
Thereafter, all the current audit logs need to be archived. This can be done for both the instance and for databases. The following archives the logs for my test database and writes the file to the configured archive path (or the default path if none is specified):
db2audit archive database hltest
After all the configuration and preparation, we are finally at the really interesting part, the new extract option. Using "syslog" as destination and the category "user" with the priority level "info", the audit logs are handed over to the system error and event logger:
db2audit extract syslog user.info from files /tmp/db2audit.*
Did the logs really make their way over from DB2 to the system infrastructure? Here is my successful test:
[hloeser@mymachine ~]$ sudo grep -i easter /var/log/db2/user_messages.log
Apr 2 13:32:10 mymachine db2audit: timestamp=2015-04-02-13.31.09.089507; category=CONTEXT; audit event=PREPARE; event correlator=40; database=HLTEST; userid=hloeser; authid=HLOESER; application id=*LOCAL.hloeser.150402095529; application name=db2bp; package schema=NULLID; package name=SQLC2K26; package section=201; text=select * from eastereggs; local transaction id=0x3266020000000000; global transaction id=0x0000000000000000000000000000000000000000; instance name=hloeser; hostname=mymachine;
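The record is a list of semicolon-separated key=value pairs, so it slices nicely with standard tools. A small sketch (the sample line abbreviates the record shown above; this is not an official parser, just standard text processing):

```shell
# Sketch: split a db2audit syslog record into one field per line.
# The sample line abbreviates the record above; real records carry more fields.
line='timestamp=2015-04-02-13.31.09.089507; category=CONTEXT; audit event=PREPARE; userid=hloeser; text=select * from eastereggs'
printf '%s\n' "$line" | tr ';' '\n' | sed 's/^ *//'
```

The same pipeline works on grep output from the log file, e.g., to quickly see which user issued a failing statement.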
Happy Easter and hopefully some chocolate eggs are left for you!
Labels:
administration,
DB2,
fixpack,
IT,
performance,
privacy,
security,
system management,
version 10.5
Thursday, November 20, 2014
Useful DB2 administrative functions and views
Did you know that there are about 80 (eight-zero) administrative views in the SYSIBMADM schema in DB2 that are ready for use? I have used several of them and also looked into the documentation, but 80 is quite a lot. (Almost) All of them are documented in the DB2 Knowledge Center in the "Built-in routines and views" section.
The routines live in the SYSPROC schema, administrative views can be found in the schema SYSIBMADM. Given that insight it is easy to construct a simple query to find all available views:
SELECT viewname from syscat.views where viewschema='SYSIBMADM'
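A companion sketch for counting both sides, using the same SYSCAT catalog views (the exact numbers returned will depend on your DB2 version and fixpack):

```sql
-- How many administrative views does this installation have?
SELECT COUNT(*) FROM syscat.views WHERE viewschema = 'SYSIBMADM';

-- The corresponding routines live in SYSPROC
SELECT routinename FROM syscat.routines WHERE routineschema = 'SYSPROC';
```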
Depending on your version and fixpack level of DB2 the result will vary. Speaking of fixpack level, do you know how to find out what your system is running by using SQL? The view ENV_INST_INFO may help in that case because it returns instance-related information such as the instance name, the DB2 version, fixpack, and build level:
SELECT * FROM SYSIBMADM.ENV_INST_INFO
Are you connected to, e.g., an Advanced Workgroup Server Edition (AWSE) of DB2 or an Enterprise Server Edition (ESE)? Find out by querying the product information using the view ENV_PROD_INFO. It returns the installed product, the kind of active licenses, and more:
SELECT * FROM SYSIBMADM.ENV_PROD_INFO
Next in the list of useful views with system information is ENV_SYS_INFO. It can be utilized to find out more about the operating system, the type of hardware, installed CPU and memory, etc.:
SELECT * from SYSIBMADM.ENV_SYS_INFO
Last, but not least in my list of views with basic system information are DBMCFG and DBCFG. As the names imply, these views help to retrieve the current instance (database manager / dbm) or the current database (db) configuration. This makes it easy to find out whether the self-tuning memory manager (STMM) is active or where diagnostic logs are stored.
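For example (a sketch; both views expose NAME and VALUE columns, and the parameter names are stored in lowercase as far as I remember):

```sql
-- Is the self-tuning memory manager active for this database?
SELECT name, value FROM sysibmadm.dbcfg WHERE name = 'self_tuning_mem';

-- Where does the instance write its diagnostic logs?
SELECT name, value FROM sysibmadm.dbmcfg WHERE name = 'diagpath';
```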
That's it for today, I am back to playing with more of those views (and routines)...
Labels:
administration,
dashdb,
DB2,
IT,
knowledge center,
monitoring,
sql,
system management,
version 10,
version 10.5
Monday, October 15, 2012
Updated Redbook for DB2 10: High Availability and Disaster Recovery Options
Well, there is not much to say about this existing Redbook that has been updated to reflect DB2 10.1 for Linux, UNIX, and Windows and current technologies. The "High Availability and Disaster Recovery Options for DB2 for Linux, UNIX, and Windows" Redbook describes and explains technologies like IBM Tivoli TSA, PowerHA SystemMirror, Microsoft Windows Failover Cluster, WebSphere Q Replication or InfoSphere CDC.
With close to 600 pages it also requires your high availability...
Labels:
availability,
best practices,
database,
DB2,
IT,
pureScale,
redbook,
system management,
version 10
Friday, June 17, 2011
DB2 Merge Backup: When some deltas make a full
I sometimes teach Data Management at university and one topic is backup strategies. We then discuss what is needed for a point-in-time recovery and what can be done to minimize the time needed for the recovery process. Full backups, incremental backups, delta backups, etc. are things to consider. Well, in a production environment having adequate maintenance windows to periodically take full backups, even online backups, could be a problem.
Some days ago DB2 Merge Backup became available. It combines incremental and delta backups to compute a full backup, so that taking such a full backup can be avoided. I just checked the product web page and a trial version is available. System requirements are DB2 LUW 9.5 or DB2 9.7, and it runs on most platforms.
Monday, January 17, 2011
Current DB2 Fixpacks
The year is still young, but I already had a business trip last week. When looking at/after your systems, planning new ones, or just teaching or discussing DB2, it is good to know where we are in current fixpack levels.
The page "DB2 Fix Packs by version for DB2 for Linux, UNIX, and Windows" lists all of them from version 8.2 to 9.8. As of today, DB2 9.5 is at FP7 (released December 13th, 2010), DB2 9.7 at FP3a (released October 20th, 2010), and DB2 9.8 (this is the pureScale feature) is at FP3 (released December 17th, 2010). As you can see, two fixpacks for the recent DB2 versions came out just before the holidays.
Tuesday, December 7, 2010
How long does it take to build workload optimized systems?
And I don't mean how long it takes to assemble one...
Friday, September 3, 2010
Obtaining information about the installed DB2 version (level)
Somehow, I had a mental blackout earlier today when I tried to obtain information about the installed DB2 version and couldn't remember the command. Of course, you can always look into the db2diag.log file, because DB2 writes an entry with version information and data about the system it is running on to that diagnostic file whenever it starts up.
However, the command I couldn't come up with is db2level. This command prints out the same information. And then, for those who need to obtain that information using plain SQL, they can utilize a special administrative view, ENV_INST_INFO.
What does the output look like?
From my db2diag.log:
DATA #1 : Build Level, 128 bytes
Instance "hloeser" uses "32" bits and DB2 code release "SQL09071"
with level identifier "08020107".
Informational tokens are "DB2 v9.7.0.1", "s091114", "IP23033", Fix Pack "1".
Output from db2level:
DB21085I Instance "hloeser" uses "32" bits and DB2 code release "SQL09071"
with level identifier "08020107".
Informational tokens are "DB2 v9.7.0.1", "s091114", "IP23033", and Fix Pack
"1".
Product is installed at "/opt/ibm/db2/V9.7".
I spare you the output from "select * from sysibmadm.env_inst_info". And why did I want to look at it? I was checking whether I had already applied fixpack 2.
Tuesday, January 12, 2010
About my watch and your workloads or projects....
It seems like I have a follow-up to my "About my pants and your systems". This weekend I was kind of dozing off in a train of thought when I noticed my wrist watch. It is very lightweight and I usually don't notice it while wearing this watch. The design is very simple, just 3 hands (hour, minute, second) and a date indicator. I also have a chunkier sports watch with an atomic clock, solar power, and a dominating digital display, as well as some other watches.
When wearing that sports watch I was focusing so much on the precise time ('cause it's receiving the signal from the atomic clock) that I always tried to be exactly on time. However, those who I met or the trains I tried to catch did not. With my lightweight watch on I am more relaxed and for meetings and other occasions I try to follow the "spirit" of invitations.
Anyway, what I noticed in the IT world is that in many cases IT professionals try to be overly exact. It started when studying computer science that every word of a description was taken into full account and it still is something I have to deal with during software development (and deployment). In our profession in too many cases we try to aim for the best/optimal/fastest solution, but in how many situations would have something less served equally well? We could have used that energy and motivation for another problem (or gone home earlier).
How many of your workloads are overly optimized or overly precise? Do you compute exact values or do you sample to get current trends? Are those workloads still so important that they have to run with the highest priority, or can they be moved to other categories? Do you focus too much on details (and lose the big picture)? Are you relaxed...?
Sunday, January 10, 2010
About my pants and your systems...
Happy New Year everyone, I hope you had a good start into the new year. After I was first stuffed a couple of times by my mother-in-law, then by my mother, my wife and I had an afternoon for ourselves. We decided to go shopping for clothes. Well, as you might guess by this time, it turned out to be not such a great idea in some (few?) aspects.
I tried on some pants and then realized that just after the holidays is not a good point in time to tell whether pants will fit comfortably for the rest of the year. I could only judge which pants would definitely not fit me. On the plus size (no pun intended) I made the link to the IT world:
- Do your systems have "fat" that you need to trim to have them run comfortably again? Do you have procedures in place to evaluate your systems from time to time? Do you even proactively trim your systems? (I remember the regular "file system is full, please delete your stuff" emails from various jobs)
- How do you size your production systems in terms of storage capacity or CPU power? What is the peak performance you can expect your system to handle comfortably? Do you have procedures in place to re-evaluate your system capacity from time to time and to upgrade them if necessary?
- How do you test new systems or software? What is good enough in terms of stress testing? When can you be sure that your system can deal with peak loads? Did you plan for the "unexpected"?
- Your systems maybe could have had a good TCO when you implemented them, but that cost analysis could be outdated by now. Do you evaluate alternatives on a regular basis? Are you ready to make a switch?