When I announced last week that we are putting together a Usage Tracking project, I assumed that people would know WHY. To me it seemed obvious, having implemented this type of system over the 15 years I have worked with OBIEE (née Siebel Analytics).
So, why do we need a usage tracking ‘Warehouse’ solution when there are already database tables in place?
- You can identify who is not using the system.
- You can see more information about your users – not just their username, but things like what department they are in, what rank, what country.
- You can group the results into sets, e.g. User Type Totals, or Weekly Averages vs the Last 6 Months’ Average.
- The results can be tuned (see note below).
- You can show any correlation between Data Quality and Dashboard Usage
- You can mix different environments, projects or versions.
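To illustrate the grouping point above, here is a minimal sketch, using made-up records rather than the real S_NQ_ACCT columns, that compares last week’s query activity against the weekly average over the last six months:

```python
from datetime import date, timedelta
from statistics import mean

# Hypothetical usage records: (username, query_date) pairs, standing in for
# rows extracted from a usage tracking table such as S_NQ_ACCT.
records = [
    ("alice", date(2015, 7, 1)), ("alice", date(2015, 7, 2)),
    ("bob",   date(2015, 7, 1)), ("alice", date(2015, 2, 10)),
]

def weekly_counts(records, start, end):
    """Count queries per ISO (year, week) within the [start, end] range."""
    counts = {}
    for _, d in records:
        if start <= d <= end:
            key = tuple(d.isocalendar()[:2])   # (iso_year, iso_week)
            counts[key] = counts.get(key, 0) + 1
    return counts

today = date(2015, 7, 5)
last_week = weekly_counts(records, today - timedelta(days=7), today)
six_months = weekly_counts(records, today - timedelta(days=182), today)

# Compare this week's totals against the six-month weekly average
six_month_avg = mean(six_months.values()) if six_months else 0
```

In the warehouse these sets would of course be computed with aggregate tables rather than in application code, but the grouping logic is the same.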
Tuning the Capture and Use of Tracking Data
On a project I work on there are literally millions of usage records. Imagine reporting on 20 million records using the out-of-the-box table, while that same table is being populated by live user activity!
How can you index the table for querying without slowing it down for writes? The answer is: you don’t.
Take all indexes off the standard Usage Tracking tables (e.g. S_NQ_ACCT) and let logging run unhindered. You can even put this table in another database.
Each night (or more frequently if you prefer), move the data out of the S_NQ_ACCT table and into a staging area. From there we populate dimensions and facts, then update any aggregates, apply statistics (stored or refreshed), and refresh materialized views.
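The nightly move described above can be sketched roughly as follows. This uses an in-memory SQLite database as a stand-in for the Oracle source and staging schemas, and the column names are illustrative, not the real S_NQ_ACCT layout:

```python
import sqlite3

# Stand-in for the live, unindexed tracking table and its staging target.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE s_nq_acct (user_name TEXT, start_ts TEXT, saw_dashboard TEXT)")
conn.execute("CREATE TABLE stg_usage (user_name TEXT, start_ts TEXT, saw_dashboard TEXT)")
conn.executemany("INSERT INTO s_nq_acct VALUES (?, ?, ?)",
                 [("alice", "2015-07-01", "Sales"), ("bob", "2015-07-01", "Finance")])

# 1. Copy the tracked rows into the staging area...
conn.execute("INSERT INTO stg_usage SELECT * FROM s_nq_acct")
# 2. ...then clear the live table so it stays small and fast for writes.
conn.execute("DELETE FROM s_nq_acct")
conn.commit()

staged = conn.execute("SELECT COUNT(*) FROM stg_usage").fetchone()[0]
remaining = conn.execute("SELECT COUNT(*) FROM s_nq_acct").fetchone()[0]
```

In practice this would be an ODI package doing the move inside one transaction, with the dimension/fact loads, statistics, and materialized-view refreshes chained on afterwards.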
This is where the project will focus first: create the target dimensions – those populated from Usage Tracking, those from user tables, those from people tables (for department etc.), and dates (as shown in part 1). We then create the facts and aggregates.
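As a rough picture of what those targets might look like, here is a tiny star-schema sketch, again in SQLite for illustration. The table and column names are assumptions for this sketch, not the project’s final design:

```python
import sqlite3

# Illustrative star-schema targets for the usage warehouse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE d_user (user_key INTEGER PRIMARY KEY, user_name TEXT, dept TEXT, country TEXT);
CREATE TABLE d_date (date_key INTEGER PRIMARY KEY, cal_date TEXT);
CREATE TABLE f_usage (user_key INTEGER, date_key INTEGER, query_count INTEGER);
""")
conn.execute("INSERT INTO d_user VALUES (1, 'alice', 'Finance', 'UK')")
conn.execute("INSERT INTO d_date VALUES (20150701, '2015-07-01')")
conn.execute("INSERT INTO f_usage VALUES (1, 20150701, 42)")

# Once usage is a fact joined to a people-sourced dimension, a
# department-level rollup is just a dimensional join:
row = conn.execute("""
    SELECT u.dept, SUM(f.query_count)
    FROM f_usage f JOIN d_user u ON u.user_key = f.user_key
    GROUP BY u.dept
""").fetchone()
```

This is what makes the “more than just a username” reporting possible: department, rank, and country all live on the user dimension rather than in the raw tracking table.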
Using ODI we then build the ETL to load those tables.
Remember, if you want to join in, just email me or send me a LinkedIn message. We have four people so far, so there is room for more.
Blogs of the week
Sunil says, “Recently while working with a customer we ran into an interesting situation concerning file compression and processing time. For a system like Hadoop, file compression has been always a good way to save on space, especially when Hadoop replicates the data multiple times.”
Read the full blog for more details.
Cameron Lackpour talks about allocations of the most simple kind in this blog post. He says, “To set the stage, this allocation post will:
Use a Hybridized version of Sample.Basic
Allocate the indirect Budget cost Distribution from a pool amount
Use a driver based on Actual Sales level 0 Product and Market as a percentage of Actual Sales Total Product and Total Market
I (and the world+dog of Essbase developers) have written allocations like this since the Year Dot. I purposely chose this example because it is so basic and yet pervasive. Never underestimate how easy Essbase makes this sort of logic. I think sometimes we lose track of the still-revolutionary functionality of Essbase.”
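The shape of the driver-based allocation Cameron describes can be sketched outside Essbase in a few lines: spread a pooled Budget amount across level-0 Product/Market combinations in proportion to their share of total Actual Sales. The pool amount and sales figures below are invented for illustration:

```python
# Hypothetical pooled cost and Actual Sales by (product, market).
pool = 10000.0
actual_sales = {
    ("Cola", "New York"):      500.0,
    ("Cola", "Boston"):        300.0,
    ("Root Beer", "New York"): 200.0,
}

# Each combination's allocation is its percentage of total Actual Sales
# applied to the pool, so the allocations sum back to the pool amount.
total_sales = sum(actual_sales.values())
allocated = {k: pool * v / total_sales for k, v in actual_sales.items()}
```

Essbase expresses the same idea declaratively in a calc script; the point of Cameron’s post is how little effort that takes.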
Andrejus Baranovski blogs, “Combo box LOV with filtering is simple, but effective feature not used often. You should rethink LOV design and apply combo box LOV with filtering where possible. Advantage of such LOV implementation is pretty clear – fast list items access, based on filter criteria. This will reduce number of times required to open LOV popup and improve performance of UI.”
Jeff Smith writes, “I was remarking to someone this morning how easy I thought it was to setup a job, say on the first and third Wednesday of the month. The flexibility of DBMS_SCHEDULER, even without taking into account the power of schedules, is really powerful.”
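In DBMS_SCHEDULER a rule like “first and third Wednesday of the month” is expressed in the calendaring syntax, roughly `repeat_interval => 'FREQ=MONTHLY;BYDAY=1WED,3WED'`. As a sanity check on what such a rule should fire on, here is a small sketch that computes the n-th weekday of a month (function name and approach are mine, not Oracle’s):

```python
from datetime import date, timedelta

def nth_weekday(year, month, weekday, n):
    """Return the n-th occurrence of weekday (0=Mon .. 6=Sun) in a month."""
    first = date(year, month, 1)
    # Days from the 1st to the first occurrence of the target weekday
    offset = (weekday - first.weekday()) % 7
    return first + timedelta(days=offset + 7 * (n - 1))

first_wed = nth_weekday(2015, 7, 2, 1)   # first Wednesday of July 2015
third_wed = nth_weekday(2015, 7, 2, 3)   # third Wednesday of July 2015
```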
Connor McDonald blogs, “In the previous post, I mentioned that for a programming environment fetching rows from the database, then the method of
- open a ref cursor
- issue a fetch call
- close the ref cursor
might not be appropriate for those situations where the result set is known to be a single row (eg primary key lookup). A better option might be to call a procedure and get those outputs as parameters. And I broke a cardinal rule… I effectively said “Here’s something that I know to be true…so you should just trust me”. Dear oh dear oh dear. That’s not good. So without further ado, it’s time to put together a test case.”
Danny Bryant says, “As usual, I’m goofing off in my home Data Center (riiiiight), and I decide to install Oracle Linux 7 and the 12c Database. While not concerning myself with what’s on the Compatibility Matrix, I go ahead and start up Virtual Box 5 (recently released) and install my O/S. No problem there so, I’m thinking everything is fine. Time to install the database. At this point, I’m doing a basic install, so we are just going to follow the installation steps … very generic. Then I get this:…”
Read the full blog for details.
Paul Cannon says, “Today I’m taking a look at installing BICS DataSync – a utility for loading data into an Oracle BICS (Business Intelligence Cloud Service) database. In a second blog – to follow – I’ll use it to transfer up some data.”
Part 2 is also available to view here.
Truncate – 2
In the second part of a 2 part blog, Jonathan Lewis says, “Following on from my earlier comments about how a truncate works in Oracle, the second oldest question about truncate (and other DDL) appeared on the OTN database forum – “Why isn’t a commit required for DDL?””
This week on Twitter
Paul Tudor Oprea tweeted that the mastermind of OBIEE has passed away
Steve Yeung asked What’s new in this OBIEE version?
Kent Graziano shared Microsoft And Salesforce Join In $5.3 Billion Buyout Of Informatica
This week on LinkedIn
Paul Anderson shared Business Analytics Monthly Index – July 2015
Videos such as: Using ODI to Load Essbase Data
and Changing gears – boat one mods