Build Something Awesome: ODTUG GeekAThon

I included the content of this post in my Week In Review, but I think it deserves a separate post altogether.  ODTUG GeekAThon is the first contest of this type that ODTUG has conducted, so let’s make it a success.  This way we get more contests, more awesome ideas, and more prizes.

So what’s this contest about?  I don’t want to re-cap the entire GeekAThon website, but essentially they want you to build something out of the Beacon that you received at Kscope16.  Using the Bluetooth Low Energy technology of the Beacon…solve a problem.  Then show the rest of us how to do the same thing.  Go check out the site to find out more:

GeekAThon Announcement

GeekAThon Competition Website

A few important dates for those of you interested:

You must register by August 15, 2016, at 11:59 PM Pacific Time.

You must submit your video entry by September 23, 2016, 11:59 PM Pacific Time.

You must submit your solution document by September 23, 2016, 11:59 PM Pacific Time.

And for those of you who need a visual to assist in your procrastination, the days in red have already passed as of this post.  The day in green is the one you should be ready for, but will likely need to work right up until 11:59 to meet. 😉

Calendar2016080108

Calendar2016080109

Now go build something awesome!


Hyperion EPM Week In Review: August 8, 2016

HyperionEPMWeekInReview

It was another light week in the Hyperion EPM blog community.  Oracle, however, did not help out with any new product releases.

Hyperion EPM Patches and Updates:

None…

Hyperion EPM Blog Posts:

I posted about converting your applications to Hybrid Essbase this week.

Vijay tells us about an error in the Oracle documentation for DRM.  I’m certain you are all surprised that any of the Oracle docs could have mistakes.  No?  I guess I’m not either.

Celvin shows us where Oracle hid the Migration button for environments where no application has yet been created.  He also has a great post on making LCM content readable.  Given the movement to the cloud and the lack of accessibility to the repository in the future…this is great content.  And it should work on-prem as well…if you were so inclined.

Sibin tells us about the Essbase application log parser he is working on.  He then tells us that it is actually available on GitHub.  Go break it for him!

John has a great post on cloud to cloud data movement.  This has a massive amount of potential as clients start to have FCCS and multiple instances of PBCS.

Jason continues his blog series on Dodeca.  This time he shows us how to use SQL and Essbase in a single view.

Cameron has a great post on dynamic columns in Essbase load rules.  Looks like more to come soon as well.

Tim has a message to all of you ODTUG community members.  Read it…and decide what you want to do.  Will you be the next members of the board?  Will you apply for the leadership program?

ODTUG GeekAThon

Another quick shout out for the ODTUG GeekAThon.  For those of us real geeks out there, they have a competition worthy of the maker community.  You can find the announcement here.  You can find the competition website here.  And a quick re-cap:

GeekAThon Announcement

GeekAThon Competition Website

A few important dates for those of you interested:

You must register by August 15, 2016, at 11:59 PM Pacific Time.

You must submit your video entry by September 23, 2016, 11:59 PM Pacific Time.

You must submit your solution document by September 23, 2016, 11:59 PM Pacific Time.

And for those of you who need a visual to assist in your procrastination, the days in red have already passed as of this post.  The day in green is the one you should be ready for, but will likely need to work right up until 11:59 to meet. 😉

Calendar2016080108

Calendar2016080109

Now go build something awesome!


Hybrid Essbase: Rapidly Make Parents Dynamic Calc

Hybrid Essbase is the biggest advancement in Essbase technology since ASO was released.  It truly takes Essbase to another level when it comes to getting the best out of both ASO and BSO technology.  Converting your application from BSO to Hybrid can be a long process.  You have to make sure that all of your calculations still work the way they should.  You have to make sure that your users don’t break Hybrid mode.  You have to update the storage settings for all of your sparse dimensions.

I can’t help you with the first two items; they just take time and effort.  What I can help you with is the time required to update your sparse dimensions.  I spend a lot of time hacking around in the Planning repository.  I suddenly found a new use for all of that time spent with the repository…getting a good list of all of the upper-level members in a dimension.  If we just export a dimension, we get a good list, but we have to do a lot of work to really figure out which members are parents and which are not.  Luckily, the HSP_OBJECT table has a column that tells us just that: HAS_CHILDREN.

Microsoft SQL Server

The query to do this is very, very simple.  The process for updating your dimensions using the query takes a little bit more explanation.  We’ll start with SQL Server since that happens to be where I’m the most comfortable.  I’m going to assume you are using SQL Server Management Studio…because why wouldn’t you?  It’s awesome.  Before we even get to the query, we first need to make a configuration change.  Open Management Studio and click on Tools, then Options.

HybridEssbase01

Expand Query Results, then expand SQL Server, and then click on Results to Grid:

HybridEssbase02

Check the box titled Include column headers when copying or saving the results and click OK.  Why did we start here?  Because we have to restart Management Studio for the new setting to actually take effect.  So do that next…

Now that we have Management Studio ready to go, we can get down to the query.  Here it is in all of its simplicity:

-- List every parent (HAS_CHILDREN = 1) in the Product dimension along
-- with its own parent and the new data storage property for Plan1
SELECT
	o.OBJECT_NAME AS Product
	,po.OBJECT_NAME AS Parent
	,'dynamic calc' AS [Data Storage (Plan1)]
FROM
	HSP_OBJECT o
INNER JOIN
	HSP_MEMBER m ON m.MEMBER_ID = o.OBJECT_ID	-- restrict to members
INNER JOIN
	HSP_DIMENSION d ON m.DIM_ID = d.DIM_ID		-- the member's dimension
INNER JOIN
	HSP_OBJECT do ON do.OBJECT_ID = d.DIM_ID	-- dimension name
INNER JOIN
	HSP_OBJECT po ON po.OBJECT_ID = o.PARENT_ID	-- parent name
WHERE
	do.OBJECT_NAME = 'Product'
	AND o.HAS_CHILDREN = 1

We have a few joins and a very simple where clause.  As always, I’m using my handy-dandy Vision demo application.  A quick look at the results shows us that there are very few parents in the Product dimension:

HybridEssbase03

Now we just need to get this into a format that we can easily import back into Planning.  All we have to do is right-click anywhere in the results and click on Save Results As….  Enter a file name and click Save.

HybridEssbase05
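With the column-header setting enabled, the saved file should look roughly like this.  The header row comes straight from the query's column aliases; the member rows below are illustrative placeholders, not actual Vision output:

```text
Product,Parent,Data Storage (Plan1)
Total Products,Product,dynamic calc
Accessories,Total Products,dynamic calc
```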

Now we should have a usable format for a simple Import to update our dimension settings.  Let’s head to workspace and give it a shot.  Fire up your Planning application and click on Administration, then Import and Export, and finally Import Metadata from File:

HybridEssbase06

Select your dimension from the list and then browse to find your file.  Once the file has uploaded, click the Validate button.  This will at least tell us if we have a properly formatted CSV:

HybridEssbase07

That looks like a good start.  Let’s go ahead and complete the import and see what happens:

HybridEssbase08

This looks…troubling.  One rejected record.  Let’s take a look at our logs to see why the record was rejected:

HybridEssbase09

As we can see, nothing to worry about.  The top-level member of the dimension is rejected because there is no valid parent.  We can ignore this and go check to see if our changes took effect.

HybridEssbase10

At first it looks like we may have failed.  But wait!  Again, nothing to worry about yet.  We didn’t update the default data storage.  We only updated Plan1.  So let’s look at the data storage property for Plan1:

HybridEssbase11

That’s more like it!

Oracle Database

But wait…I have an Oracle DB for my repository.  Not to worry.  Let’s check out how to do this with Oracle and SQL Developer.  First, let’s take a look at the query:

SELECT
	o.OBJECT_NAME AS Product
	,po.OBJECT_NAME AS Parent
	,'dynamic calc' AS "Data Storage (Plan1)"
FROM
	HSP_OBJECT o
INNER JOIN
	HSP_MEMBER m ON m.MEMBER_ID = o.OBJECT_ID
INNER JOIN
	HSP_DIMENSION d ON m.DIM_ID = d.DIM_ID
INNER JOIN
	HSP_OBJECT do ON do.OBJECT_ID = d.DIM_ID
INNER JOIN
	HSP_OBJECT po ON po.OBJECT_ID = o.PARENT_ID
WHERE
	do.OBJECT_NAME = 'Product'
	AND o.HAS_CHILDREN = 1

This is very, very similar to the SQL Server query.  The only real difference is the use of double quotes instead of brackets around our third column name.  A small, yet important distinction.  Let’s again look at the results:

HybridEssbase12

The Oracle results look just like the SQL Server results…which is a good thing.  Now we just have to get the results into a usable CSV format for import.  This will take a few more steps, but it’s still very easy.  Right-click on the result set and click Export:

HybridEssbase13

Change the export format to csv, choose a location and file name, and then click Next.

HybridEssbase14

Click Finish and we should have our CSV file ready to go.  Let’s fire up our Planning application and click on Administration, then Import and Export, and finally Import Metadata from File:

HybridEssbase06

Select your dimension from the list and then browse to find your file.  Once the file has uploaded, click the Validate button.  This will at least tell us if we have a properly formatted CSV:

HybridEssbase15

 

Much like the SQL Server file, this looks like a good start.  Let’s go ahead and complete the import and see what happens:

HybridEssbase16

Again, much like SQL Server, we have the same single rejected record.  Let’s make sure that the same error message is present:

HybridEssbase09

As we can see, still nothing to worry about.  The top-level member of the dimension is rejected because there is no valid parent.  We can ignore this and go check to see if our changes took effect.

HybridEssbase10

As with SQL Server, we did not update the default data storage property, only Plan1.  So let’s look at the data storage property for Plan1:

HybridEssbase11

And just like that…we have a sparse dimension ready for Hybrid Essbase.  Be sure to refresh your database back to Essbase.  You can simply enter a different dimension name in the query and follow the same process to update the remaining sparse dimensions.
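To repeat the process for another sparse dimension, the only edit is the dimension name in the WHERE clause.  For example, assuming a dimension named Entity (a hypothetical name; swap in your own):

```sql
WHERE
	do.OBJECT_NAME = 'Entity'	-- hypothetical dimension name
	AND o.HAS_CHILDREN = 1
```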

Celvin also has an excellent Essbase utility that will do this for you, but it makes use of the API and Java, and generally is a bit more complicated than this method if you have access to the repository.  So what happens if you can’t use the API and you can’t access the repository?  We have another option.  We’ll save that for another day; this blog post is already long enough!


Hyperion EPM Week In Review: August 1, 2016

HyperionEPMWeekInReview

It was a light week in the Hyperion EPM blog community.  Oracle helped out a bit with some updates.

Hyperion EPM Patches and Updates:

Tax Provision 11.1.2.4.103 has been released.  As previously mentioned, totally out of my area of knowledge.

The Oracle support site was updated this week.  You can find out more here.

Kash sent out updates for both PBCS and FCCS.  Data management was added to FCCS, so that’s pretty huge.  The PBCS updates aren’t too earth-shattering.

cubeSavvy is officially production-ready!  Harry has released version 6.0.0.  Get your web-based Essbase going in production!

Hyperion EPM Blog Posts:

Sarah has a guide on installing Oracle 12c on Windows.  Her sample covers the multi-tenant version of 12c.  If you’d like a single-tenant version, you can always check out Jake’s post here.  She also has a post on BICS data sync.  Both very thorough posts.

Tim reflects on his time spent at Kscope and the preparations that go into it.  I’d encourage everyone to read his comments and consider submitting an abstract for Kscope17.

Francisco has a very in-depth article about using the on-prem REST APIs for Planning to load metadata.  One word of caution…this is still unsupported as I understand it.  That doesn’t mean it won’t work, just that they might change the API without notice.

Cameron continues his Kscope in Snaps series.  I really should take more pictures at these things…

Dmitry has an update to his Essbase Scrambler.  This is an awesome tool and he gives us a few API tips along the way.

Gary has a helpful tip for those of us using Firefox as our Hyperion EPM browser of choice.

Harry not only had his production release of cubeSavvy, but also a few beta releases this week: 9.6 and 9.7.

ODTUG GeekAThon

Another quick shout out for the ODTUG GeekAThon.  For those of us real geeks out there, they have a competition worthy of the maker community.  You can find the announcement here.  You can find the competition website here.  And a quick re-cap:

GeekAThon Announcement

GeekAThon Competition Website

Now go build something awesome!


Hyperion EPM Week In Review: July 23, 2016

HyperionEPMWeekInReview

Welcome to another Monday edition of the Hyperion EPM Week In Review.  Let’s just jump right in…

Hyperion EPM Patches and Updates:

Disclosure Management 11.1.2.3.820 has been released.  This one is way out of my wheel-house, so…

OBIEE 11.1.1.9.160719 has also been released.  This is an extremely long patch number and includes an extremely long list of bug fixes across the BI stack.

Not to be outdone…OBIEE 11.1.1.7.160719 has been released with its 11.1.1.9 sibling.

And just to round out the entire OBIEE family, 12.2.1.0.160719 was released as well.

Hyperion EPM Blog Posts:

Opal has several posts this week.  First, a review of Tony’s book, The Definitive Guide to FDMEE.  Next up is an ODTUG post about Jorge Rimblas.  Jorge is an Oracle ACE with a heavy interest in photography.  Heading back to the cloud she has a pair of posts on finding your version number and enabling data management in FCCS.

Celvin shows us the details of a pretty big bug in the date difference functions built into Calc Manager.  The good news is that there is a patch.

Sarah has a guest blog post by Teal Sexton about OBIEE and APEX integration.  Perhaps I need a guest blogger?  Anyone interested? 😉

Tim has an excellent write-up on adding a dimension to an ASO cube.  Pretty cool idea.  I’m sure I’ll make use of this before too long.

Jason has part 4 and part 5 of his on-going data input with Dodeca series.  He covers focus calculations and relational database input.

Kyle continues his bromance with Jake over at in2hyperion.  Kidding…kidding.

Vijay has some great automation samples doing exports with variables.  This should come in handy for some automated backups.  It also sounds like he has more coming soon.

Robert combines some FDMEE, MaxL, and Jython for those needing to do data clearing in ASO models.

ODTUG GeekAThon

ODTUG announced a competition this week.  For those of us real geeks out there, they have a competition worthy of the maker community.  You can find the announcement here.  You can find the competition website here.  And finally, you can register for a lunch and learn this Friday, July 29th at noon CST here.  And a quick re-cap:

GeekAThon Announcement

GeekAThon Competition Website

GeekAThon Lunch and Learn Webinar Registration

Now go build something awesome!

Final Words

That’s it for this week!  We’ll keep going with Monday posts as they seem to be working out pretty well.


My First FreeNAS: Part 1 – Build and Burn-In

Kscope16 is over, my parts have arrived, and it’s finally time to start my FreeNAS build.  Today I’m going to run through my actual build process and the start of my burn-in process.  Let’s start with the hardware…what did I order again?

  • SuperChassis 846TQ-R900B
  • (2) E5-2670 @ 2.6 GHz
  • Supermicro X9DR7-LN4F-JBOD
  • 256GB Registered ECC DDR3 RAM (16 x 16GB)
  • Noctua i4 Heatsinks
  • (5) Noctua NF-R8 (to bring the noise level down on the chassis)
  • (2) SanDisk Cruzer 16GB CZ33
  • (2) Supermicro AOC-2308-l8e
  • (3) Full-Height LSI Backplates (for the AOC-2308’s and the P3605)
  • (6) Mini-SAS Breakout Cables
  • Intel P3605 1.6TB PCIe SSD
  • (9) 2TB HGST Ultrastar 7K3000 Hard Drives
  • (4) 10Gb Twinax DAC Cables
  • (2) Intel X520-DA2

And here’s the pile of goodies:

Build01

I always start with the motherboard by itself:

Build02

Next up…the CPU(s):

Build03

CPU close-up:

Build04

Before we install the heatsinks, let’s install the memory.  The heatsinks are pretty big and have a habit of getting in the way:

Build05

That’s a lot of memory…how about a close-up:

Build06

Now we can install the heatsinks:

Build07

Like I said…huge:

Build08

Now that we have all of the core components in place on the motherboard, let’s put it into our case:

Build09

Obviously, we have quite a few other components to add (hard drives, add-in cards, etc.).  But for now, I like to keep it simple for the burn-in process.  So how do we go about that?  For the basic hardware, there are two recommended steps.  Because memory is so important to FreeNAS, we have to make sure that our memory is in good working order.  For those of us purchasing used hardware, this is especially critical.  Once we have the memory tested, we will then test our CPUs to make sure that they are functional and to take a look at the temperatures.

So how do we do this?  You can download utilities like Memtest86+ or CPUstress and boot directly into those tools.  But since I’m averse to redoing work that someone else has already done, I just downloaded the latest Ultimate Boot CD.  This comes with a mega-ton of tools, including the two I need to start with: Memtest86+ and CPUstress.

You can download the ISO here.  Once you have downloaded the ISO, you have two choices.  You can use one of my favorite tools, Rufus, to burn the ISO to a USB thumb drive and then just boot from the thumb drive.  The second option is the preferred one.  Hopefully you purchased server-class hardware for your FreeNAS box and that hardware has IPMI and remote KVM.  If so, then you will likely be able to mount the ISO over the network and easily boot from the virtual media.  This is the option I went for.

My Supermicro board even has two options for this option (options on top of options!).  You can do this through the IPMI interface and mount an ISO from a share or you can use the iKVM to mount the ISO.  Connect to your server with iKVM and select Virtual Media and then Virtual Storage.

iKVM01

Switch to the CDROM&ISO tab, select ISO File from the drop-down, and click Open Image:

iKVM02

Select the ultimate boot CD image name and click Open:

iKVM03

Finally click Plug In:

iKVM04

Once we reboot (or boot up; with no other OS installed, it should boot right into the CD):

UBCD01

We’ll go down to Memory and select Memtest86+:

UBCD02

Memtest86+ is a somewhat newer release of a really old memory testing utility I have used for over a decade: Memtest86.  This release takes the older code and brings support for newer hardware and fixes a number of bugs.  Even still, it is pretty old.  It also takes a long…long time to run with 256GB of memory.  So I ran a single pass to start:

Build10

Once that first pass completed (roughly 24 hours), I focused on stressing my CPUs.  For this I used CPUstress, also included in the Ultimate Boot CD.  I’m less familiar with this stress-testing tool, as I’ve always been more Windows focused and used tools like Prime95 for this purpose.  Again we boot into the Ultimate Boot CD:

UBCD01

This time we’ll select CPU and then CPUstress:

UBCD03

CPUstress should start up automatically:

UBCD04

This gives us one more menu…I just went with option 1:

UBCD05

Overall, it seems to work pretty well:

Build11

Now, with that running, let’s take a look at the CPU temps:

Build13

The temps look pretty good for running wide open.  There appears to be headroom for the additional heat that will be generated by the hard drives that will be added to the system.  So how does power usage look?

Build12

The numbers look pretty good here.  Again…no drives, so this number will go up considerably by the time we are completely done.  I burned the CPUs in for a little over 24 hours and then went back to Memtest86+.  I ran that for roughly four more days with no errors.  That’s all for today.  In our next post we’ll finally load up FreeNAS, get our controllers ready to go, and burn in our hard drives.


The EPM Week In Review: Week Ending July 16, 2016

This week, Oracle was kind enough to give us a few patches while the EPM community in general is still recovering from Kscope16.  We still have some great content this week, so let’s check it out.

Patches and Updates:

HFM 11.1.2.4.202 was released.  This appears to be general bug fixes.

Speaking of bug fixes, Essbase 11.1.2.4.011 was released.  No new functionality, but hopefully your bug was included.

For the shrinking population of 11.1.2.3 customers, the Tax Provision 11.1.2.3.702 patch was released.

Gary has another new release of his SV++ tool.  Check it out here.

New Blog Posts:

I posted a tip on reloading the Planning cache without requiring a restart.  Pretty cool tip…wish I had figured it out.  Special thanks again to Tjien Lie for providing the code!

Sibin tells us all about handling NULL values in FDMEE.  While on the subject of FDMEE, he also provides us a magic decoder ring for multi-period log files.

Jason has a pair of posts as he continues his on-going series on Dodeca.  He has part 3 of this series, and a related post about relational data.

Pete has a wrap-up of Kscope16, but more importantly, he shows off an awesome new feature in PBCS: automatically updated smart lists based on dimensions.  I can’t wait to see this one on-prem.  Luckily I have more and more cloud clients so I get to play with this stuff.

Sarah has a great introduction to BICS.  This starts from the very beginning…the login screen!  She continues on to the home dashboard.

Speaking of Sarah, she gets a shout-out from the DEVEPM crew.  They have a post dedicated to the ODTUG Leadership Program.

More cloud functionality that doesn’t work on-prem yet is brought to you by John Goodwin.   He has a post on loading non-numerical data using FDMEE.  I know I say this every time…but great stuff!

Doug has a recap of CloudScope…I mean Kscope16.  He also discusses the roadmap to FDMEE. <<SAFE HARBOR>>

Gary has a pair of posts this week.  He started with some Smart View performance tips that may help those of you using Office 2013 and 2016.  He also has a post on the previously mentioned new version of SV++.


Changing the Planning Repository without Restarting Planning

One of the long-running tenets of working with the Planning repository is that you must restart Planning to see your changes.  I’ve always heard that there were ways around this, but Oracle hasn’t ever been forthcoming with the specifics of how to make that happen.  Finally, at Kscope16 during my presentation on the Planning Repository, someone in the audience by the name of Tjien Lie had the code from Oracle to make this happen.  Before I get to that, let’s start with a primer on the HSP_ACTION table.  I would provide one, but John Goodwin did such a great job, so I’ll just point you to his post here.

<<<wait for the reader to go to John Goodwin’s site and read the information and come back, assuming they don’t get distracted by his wealth of amazing content>>>

Ok…now that you understand what the HSP_ACTION table does, how do we use it differently than John uses it?  By differently, I mean I don’t want to insert specific rows and update specific things.  That seems like a lot of work.  Instead, why not just have the HSP_ACTION table update the entire cache for us?  Let’s give it a shot.  First I’m going to go add a dimension to my Vision application:

UpdateCache01

Now let’s make sure the dimension shows up:

UpdateCache02

And now let’s delete it:

-- Remove the dimension's members from the "most recently used" list
DELETE FROM HSP_MRU_MEMBERS WHERE DIM_ID IN (SELECT OBJECT_ID FROM HSP_OBJECT WHERE OBJECT_NAME = 'ToBeDeleted')

-- Clear the unique-name entries for every member in the dimension
DELETE 
FROM 
	HSP_UNIQUE_NAMES 
WHERE 
	OBJECT_NAME IN (
SELECT
	OBJECT_NAME
FROM
	HSP_MEMBER m
INNER JOIN
	HSP_OBJECT o ON o.OBJECT_ID = m.MEMBER_ID
WHERE
	DIM_ID IN (SELECT OBJECT_ID FROM HSP_OBJECT WHERE OBJECT_NAME = 'ToBeDeleted'))

-- Stash the level-0 members of the dimension in a temp table
SELECT
	OBJECT_ID INTO #DeleteChildren
FROM
	HSP_MEMBER m
INNER JOIN
	HSP_OBJECT o ON o.OBJECT_ID = m.MEMBER_ID
WHERE
	DIM_ID IN (SELECT OBJECT_ID FROM HSP_OBJECT WHERE OBJECT_NAME = 'ToBeDeleted')	
	AND o.HAS_CHILDREN = 0

-- Stash the parent members of the dimension in a second temp table
SELECT
	OBJECT_ID INTO #DeleteParents
FROM
	HSP_MEMBER m
INNER JOIN
	HSP_OBJECT o ON o.OBJECT_ID = m.MEMBER_ID
WHERE
	DIM_ID IN (SELECT OBJECT_ID FROM HSP_OBJECT WHERE OBJECT_NAME = 'ToBeDeleted')	
	AND o.HAS_CHILDREN = 1

-- Delete the member records (level-0 first, then parents)
DELETE
FROM
	HSP_MEMBER
WHERE
	MEMBER_ID IN (SELECT OBJECT_ID FROM #DeleteChildren)

DELETE
FROM
	HSP_MEMBER
WHERE
	MEMBER_ID IN (SELECT OBJECT_ID FROM #DeleteParents)

-- Remove the dimension record itself
DELETE
	d
FROM
	HSP_DIMENSION d
INNER JOIN
	HSP_OBJECT o ON d.DIM_ID = o.OBJECT_ID
WHERE
	o.OBJECT_NAME = 'ToBeDeleted'

-- Finally, delete the underlying object records for those members
DELETE
FROM
	HSP_OBJECT
WHERE
	OBJECT_ID IN (SELECT OBJECT_ID FROM #DeleteChildren)

DELETE
FROM
	HSP_OBJECT
WHERE
	OBJECT_ID IN (SELECT OBJECT_ID FROM #DeleteParents)

I’ll have a post breaking down that script at some point, but basically it deletes the dimension from the repository.  Now let’s go look again at our dimension list and make sure that it still shows up:

UpdateCache03

Still there…as expected.  Now let’s try this little query, courtesy of Tjien Lie:

INSERT INTO HSP_ACTION (FROM_ID, TO_ID, ACTION_ID, OBJECT_TYPE, MESSAGE, ACTION_TIME) VALUES (0, 0, 2, -999, 'CACHE RESET',GETDATE())
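That statement is written for a SQL Server repository; GETDATE() is T-SQL.  On an Oracle repository, the equivalent should just swap in SYSDATE (an untested sketch on my part):

```sql
INSERT INTO HSP_ACTION (FROM_ID, TO_ID, ACTION_ID, OBJECT_TYPE, MESSAGE, ACTION_TIME)
VALUES (0, 0, 2, -999, 'CACHE RESET', SYSDATE)
```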

And let’s take a look at the HSP_ACTION table and make sure that we have the row inserted:

UpdateCache04

We can also check out this table to see if our cache has been updated yet.  As long as the row is here, we know that the cache hasn’t yet been updated.  After a little while, I checked the table again:
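A quick count makes the polling a little easier; once this returns zero, our reset row has been consumed (a simple sketch):

```sql
SELECT COUNT(*) AS PendingActions
FROM HSP_ACTION
WHERE MESSAGE = 'CACHE RESET'
```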

UpdateCache05

Now that our table is empty, Planning will tell us that it did in fact refresh the cache:

UpdateCache06

That takes the guesswork out of it!  So how about our dimension…is it gone?

UpdateCache07

And just like that, the dimension is gone.  I can make all of the changes that I want and I no longer need to restart Planning.  Special thanks to Tjien Lie from The Four Seasons for providing the code.  Information exchange like this is why I love Kscope and can’t wait to see everyone in my home state of Texas next year!  That’s it for now!


The EPM Week In Review: Week Ending July 9, 2016

Welcome to the week after Kscope.  This week brings us a lot of posts about Kscope16 (including mine!).  There were even a few posts not about Kscope.  This is also my first Week in Review that will be posted on Monday rather than over the weekend.  It has been suggested that posting on Monday will increase the overall visibility of the content for everyone, so we’ll see how it goes!

Patches and Updates:

DRM 11.1.2.4.341 and DRM Analytics 11.1.2.4.341 have been released.  This seems to be primarily bug fixes.

Gary also updated his SV ++ enhanced ribbon interface for Smart View.  Definitely worth a look.

New Blog Posts:

This week I have a post about Kscope16 (like everyone else) and also I’m showing off my new Oracle rack.  I’m pretty excited about this rack (I hear you snickering over that statement…get your mind out of the gutter!).

The DEVEPM crew has some nice words about Kscope16.

John Goodwin has posted the fourth and final part to his FDMEE and DRM integration series.  The whole series is definitely worth a read.

Christian has an awesome tutorial on deploying the Planning utilities on a system other than your Planning server.  I’ve often thought that there must be a way to do this as well…glad Christian figured it out.  This opens up a whole host of automation consolidation for Planning customers.

Cameron has part 1 and part 2 of his Kscope16 review in the form of photos.  He has made it through Monday night so far.

Sibin has a good tip on why your Workspace is missing all of your installed applications.  Don’t forget to deploy them to the Workspace server!

Opal has two updates this week.  First, she has her all-inclusive single post on Kscope16.  Then she has a post on backups and software updates in the Oracle EPM Cloud.

Celvin has a first look at using attributes in PBCS.  As of the July update, everyone should have these and he shows us how to use them with suppress missing to gain some additional form-building power.  He also has a great post on how to push options to all of your Smart View users during the install.  Have I mentioned lately how awesome Celvin is?  If not…Celvin is…awesome.

Jason joins a group of us last week with more than one blog post.  He has a two-part series on data input with Dodeca: part 1, part 2.  Did everyone get their Dodeca water bottle this year at Kscope?  I have it on a shelf next to my Dodeca water bottle from last year. 😉

Vijay has some excellent content about using Groovy and the REST APIs with HPCM.  I always love a good code snippet.

That’s it for this week!  We’ll see how the Monday thing works out.  Hopefully it leads to more consumption of the great content our community continues to produce!


Kscope16 is over…Kscope17 is coming!

Kscope16Logo

As has been the case for many of my Kscope experiences, Kscope16 was quick and yet still exhausting. The conference, as always, provided seemingly limitless information for Oracle EPM and BI professionals. Even in the small amount of time that I was able to attend, I managed to learn a great many things.

I also had the pleasure of meeting some of my fellow bloggers for the first time and seeing others that I’ve known for years.  Sadly, many of us only get to see each other once a year.  I have a new badge to add to my collection:

2016-07-07 00.03.28

I also received a nice polo as a presenter gift:

2016-07-07 00.03.52

Jake was kind enough to take a picture of me during one of my presentations:

MeAtKscope16

While I didn’t spend enough time this year at the event to really produce a great blog post with lots of pictures, I did have a great, if brief, experience, as I always do.  I’d like to thank everyone at ODTUG for their hard work putting together a great conference.  I’d also like to thank all of the volunteers that take time out of their year to help ODTUG put this event together.  Finally, I’d like to thank all of the presenters that spend countless hours putting together amazing content year after year.

Kscope17LogoReal

With Kscope16 in the history books, it is now time to set our focus on Kscope17.  We will be back at the JW Marriott in San Antonio, Texas.  Those of you who attended Kscope12 may remember this venue.  I’ve been to seven Kscope venues and by far the JW has been my favorite.  It has a great water park for the family and great golf.  I’ll be working hard for the next few months to have a set of abstracts submitted so that I can attend again next year.  Ok…so I’ll probably wait until the day before abstracts are due to actually do this, but I’m going to TRY to do it in advance for a change.  By now everyone is already back home (or back on the road) after safe travels.  See everyone next year!