My OOW Experience, the Prequel
In my first or second year as a DBA (around the year 2000, give or take), I heard about a magical event taking place in the Oracle realm. All the top Oracle DBAs and professionals in the world (and maybe a couple of unicorns, I’m not sure) gather in a small but beautiful village by the name of San Francisco to talk about databases. I found this story unbelievable, but I knew I had to go there.
I was about 20 at the time, so flying around the world for a conference was out of the question, but the idea of a big convention sounded awesome to me. When I went to my first local Oracle event, I was so amazed that I knew, no matter what, I would someday go to Oracle OpenWorld (OOW).
Years went by and I evolved: from programmer and DBA to senior DBA, team leader, senior consultant, service director, and even CTO of a consultancy company. But I never had the chance to go to that OpenWorld event.
How Did I Become a Public Speaker: My First Time
Last week, my dear friend Roni Vered Adar told me about a podcast by Kendra Little titled “How to Level Up Your DBA Career (Dear SQL DBA)”. Roni told me that Kendra talked about the importance of public speaking and wanted to know what I thought.
I said it’s a great idea, so Roni asked if I could tell her a little about how I started speaking in public, and about my first time doing it. I told her it’s a long story, but that I’d blog about it if she wanted. She did, so here we go…
Spark SQL and Oracle Database Integration
I’ve been meaning to write about Apache Spark for quite some time now. I’ve been using it with a few of my customers, and I find this framework powerful, practical, and useful for many big data use cases. For those of you who don’t know Apache Spark, here is a short introduction.
Apache Spark is a framework for the distributed computation and handling of big data. Like Hadoop, it uses a clustered environment to partition and distribute the data across multiple nodes, dividing the work between them. Unlike Hadoop, Spark is built around in-memory computation. Its main advantages are the ability to pipeline operations (breaking away from the rigid single map, single reduce concept of the MapReduce framework), which makes code much easier to write and run, and an in-memory architecture that makes things run much faster.
Hadoop and Spark can coexist, and by running Spark on YARN we get many benefits from that kind of environment setup.
Of course, Spark is not bulletproof, and you do need to know how to work with it to achieve the best performance. As a distributed application framework, Spark is awesome, and I suggest getting to know it as soon as possible.
I will probably write a longer post introducing it in the near future (once I’m done with all of my prior commitments).
In the meantime, here is a short explanation of how to connect from Spark SQL to Oracle Database.
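As a minimal sketch of the idea (the host, service name, schema, table, and credentials below are placeholders, and it assumes the Oracle JDBC driver jar is already on the Spark classpath), an Oracle table can be exposed to Spark SQL as a temporary view over JDBC:

```sql
-- Minimal sketch: expose an Oracle table to Spark SQL over JDBC.
-- Connection details and the table name are placeholders; the ojdbc
-- driver jar must already be on the Spark classpath (e.g. via --jars).
CREATE TEMPORARY VIEW ora_employees
USING org.apache.spark.sql.jdbc
OPTIONS (
  url      "jdbc:oracle:thin:@//dbhost:1521/orcl",
  driver   "oracle.jdbc.OracleDriver",
  dbtable  "HR.EMPLOYEES",
  user     "hr",
  password "hr"
);

-- The view can then be queried like any other Spark SQL table:
SELECT department_id, COUNT(*) FROM ora_employees GROUP BY department_id;
```

Once the view is registered, Spark reads the rows through JDBC and the rest of the query runs as a regular distributed Spark job.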
Update: here is the 200-slide presentation I made for Oracle Week 2016; it should cover most of the information newcomers need to know about Spark.
SQLcl New Version and Other Big Stuff
I’ve been meaning to write about SQLcl for quite some time now, since a lot is happening in the SQLcl world, but I haven’t had the chance due to my very busy schedule.
Since some bigger things have happened recently, this feels like a good opportunity to write about it. I promise to take some time to write (and maybe even video) some guides for SQLcl in the near future.
Okay, enough with the apologies, let’s see what is new.
Using External Table on Windows RAC ACFS
One of my customers is using Oracle RAC (11.2.0.3) on Windows 2012. This might not be the most ideal setup I’ve ever seen, but it works, and we’ll leave it at that.
One of the side effects of using Oracle RAC on Windows is that some of the basic things I’m used to doing when using RAC on Linux (for example) behave differently on Windows. Here is a quick example of that.
I was asked by the customer to create an external table using a fixed-record file. This should be easy enough, right? Well, yeah, but we need to consider that we might connect to the database from either node, so we need to put the file on shared storage.
The customer is using ASM for the RAC, so what better solution do we have than ACFS configured on top of the ASM?
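A minimal sketch of the resulting setup (the ACFS mount point, file name, record layout, and column names below are hypothetical): the directory object points at the ACFS mount, so the same file is readable from both nodes.

```sql
-- Minimal sketch: an external table over a fixed-record file on ACFS.
-- 'E:\acfsdata\ext' stands in for the real ACFS mount point, which is
-- visible to both RAC nodes; the file layout here is made up.
CREATE OR REPLACE DIRECTORY ext_data_dir AS 'E:\acfsdata\ext';

CREATE TABLE emp_ext (
  emp_id   NUMBER(6),
  emp_name VARCHAR2(30)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_data_dir
  ACCESS PARAMETERS (
    RECORDS FIXED 38   -- 36 data bytes + 2 for CRLF; adjust to the real file
    FIELDS (
      emp_id   POSITION(1:6)  CHAR(6),
      emp_name POSITION(7:36) CHAR(30)
    )
  )
  LOCATION ('emp_feed.dat')
)
REJECT LIMIT UNLIMITED;
```

Since the directory lives on the shared ACFS volume, queries against emp_ext work no matter which instance the session lands on.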
Oracle 12c Caching and In-Memory Databases
A few weeks ago, I was asked to give a private session about in-memory databases vs. traditional persistent databases. I created an hour-long session explaining the basics of database systems, how in-memory systems work, and when to use each kind of system.
One of the questions I got (and answered) was about the cache mechanism of persistent (regular) databases, and I felt this was a good opportunity to write about a new Oracle 12c feature: Force Full Database Caching.
In my session, I gave a long explanation of several hybrid solutions (such as the MySQL MEMORY storage engine and the Oracle Database 12c In-Memory option), but this post will focus on Force Full Database Caching, which is explained in the second part of the post.
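As a quick taste of what the second part covers, here is a minimal sketch of switching the mode on; the mode can only be changed while the database is mounted but not open, and it only makes sense when the buffer cache is sized to hold the entire database:

```sql
-- Minimal sketch: enable 12c Force Full Database Caching (as SYSDBA).
-- The mode can only be changed while the database is mounted, not open.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE FORCE FULL DATABASE CACHING;
ALTER DATABASE OPEN;

-- Verify the current mode:
SELECT force_full_db_caching FROM v$database;
```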
Enjoy!