
LobsterPot Solutions™ is proud to be a Gold Competency Partner in the Microsoft Partner Network.

Welcome to LobsterPot Solutions.

Improving your data story.

LobsterPot Solutions is an Australian SQL Server and Business Intelligence consultancy, offering consulting and training services. We specialise in the Microsoft Data Platform, including Power BI, SQL Server and Azure, covering everything from data resilience to data analytics, Big Data and IoT, and of course performance tuning, health checks, and more. With experts in both Melbourne and Adelaide, we can help your organisation become more data-driven.

LobsterPot Solutions is a company of firsts. When the Microsoft Partner Network went live, we were the first company in Australia to become a Gold Competency Partner, and the first in the world to gain the Gold Competency in Business Intelligence. Since then we have become the first company to employ three Australian SQL MVPs, the first in the whole Asian region to have an APS/PDW trainer on staff, and we have been involved in training other trainers in the region.

Performance Tuning

Dashboards and Reporting

Proofs of Concept

More Services here.

News and Events

Two SQL MVP awardees at LobsterPot Solutions

October 2, 2013

LobsterPot Solutions is the only company with two current Australian SQL MVP awardees on staff, after Julie Koesmarno received her first Microsoft Most Valuable Professional award. She joins Rob Farley, who is now an eight-time awardee, as one of only ten SQL MVPs in Australia.


Microsoft gives this award to “exceptional, independent community leaders who share their passion, technical expertise, and real-world knowledge of Microsoft products with others” (mvp.microsoft.com). It is one of the most sought-after awards in the Microsoft ecosystem. Julie joins an elite crew in receiving it, no doubt influenced by her involvement in technical communities in roles such as conference presenter, webinar host and community leader.

Recently relocated to Canberra, Julie is about to jet off to present about Business Intelligence at the PASS Summit, this year held in North Carolina. The PASS Summit is the largest SQL Server event in the world.

LobsterPot Solutions appoints Martin Woodcock as General Manager to drive business expansion

September 18, 2013

Microsoft Gold Partner LobsterPot Solutions – a leading business intelligence and SQL Server performance tuning consultancy – has announced the appointment of Martin Woodcock as General Manager to underpin its expansion throughout Australia.


Martin has spent the past 5 years providing sales management consulting to clients in Adelaide and Melbourne. Previously he held a variety of positions at Oracle Corporation in Europe, from UK Business Intelligence Sales Manager to Sales Vice President, South-East Europe.

Rob Farley, CEO of LobsterPot Solutions, believes that ‘Martin will bring a great deal of business development experience and his previous track record in business intelligence will supplement our business analysis resources. He will also be responsible for project management’.

Martin ‘looks forward to working alongside Rob and his colleagues – complementing their in-depth technical expertise’. His role will be part-time and he will continue his sales management consultancy with other clients through his own company, Two Hemispheres.

For further information, contact: Martin Woodcock, General Manager, LobsterPot Solutions; martin.woodcock@lobsterpot.com.au; 0414 960559.

Microsoft Certified Master in SQL Server

January 5, 2013

As mentioned in that earlier post, Rob took the MCM exams recently.

Well, he passed!

He now joins a very select few people in the world who are Microsoft Certified Master, Microsoft Certified Trainer and Microsoft Most Valuable Professional all at once. This makes him and LobsterPot an even better fit for your SQL Server projects.


Microsoft Certified Master is described as the top technical certification available in SQL Server. It is so select that Microsoft even lists all the people who have this certification publicly. You’ll see Rob on that list soon too.

Microsoft Certified Trainer indicates that someone not only has sufficient technical expertise, but also the presentation skills and aptitude to teach in a classroom environment. LobsterPot values these skills, and has three MCTs on staff, all able to deliver the highest quality SQL Server training and mentoring for your organisation. This is an important aspect of consulting – being able to explain concepts clearly and effectively.

The Microsoft Most Valuable Professional award is another exclusive award, given out to people who have demonstrated great influence in the technical community. All the LobsterPot staff are involved in the SQL Server community, which is a way that we contribute back to the SQL Server world. It also means that we have an excellent network of resources, just in case.

To have all three is significant. Congratulations Rob.

MCM exam at Academy IT

January 3, 2013

You won’t find Adelaide’s Academy IT on a list of places you can sit the Microsoft Certified Master exams yet (maybe one day). However, if you choose the remote option for the MCM Lab exam, they can probably help.

Academy IT do training, with easily the best training centre in Adelaide. They recently expanded to have a third large training room, with machines that each have 32GB of RAM and dual 23” monitors – one hooked up to the trainer’s desktop and an interactive whiteboard. It’s very impressive, and an excellent option for courses. On top of that, they’re also a great place to take Prometric and Pearson VUE exams – including the Microsoft ones. When the beta exam period came in for SQL Server 2012, LobsterPot Solutions staff used Academy IT’s facilities to take them. Whenever LobsterPot staff need to take any exams, Academy IT is the location of choice.

LobsterPot’s Rob Farley successfully took the SQL Server MCM Knowledge exam (88-970) at the PASS Summit in Seattle. However, the real test with the MCM is the Lab exam. Many people take this exam and fail – passing at the first attempt is rare. Rob took the exam on December 20th, 2012, at Academy IT, using the remote option.

What the remote option means is that the candidate provides their own hardware, with an internet connection and an external web cam. During the session, a web cam broadcasts what’s going on in the room to a proctor, who keeps an eye on both that and what is happening in a LiveMeeting or Lync session, sharing the full screen. It’s like having a test centre without having a test centre. In Rob’s case, the proctor was based in Charlotte, NC, USA.

There are Prometric testing centres around the world where the MCM Lab exam can be taken without using the remote option. However, the exam involves a remote desktop session to a computer somewhere else in the world, and there are typically latency hassles. Furthermore, testing centres don’t tend to have a reputation for providing particularly good hardware for their exam machines. Monitors don’t always offer particularly good resolution, and the machines themselves are often quite laggy. We heard stories of a candidate who took the exam just this past week at a testing centre in the US, and who had an awful time, losing time to unplanned reboots, an insufficiently large monitor, and more.

Taking the exam at Academy IT won’t stop the latency problem – the signal still needs to get to somewhere in America and back. But Academy IT were able to offer Rob a well-spec’d PC with a large monitor and an easily fast-enough internet connection. As it was, he opted to use his laptop for keyboard familiarity, and he had brought his own external webcam too. But Academy IT had excellent facilities, ensuring there was no technical reason for Rob not to pass the exam. They want people to have the best experience when taking exams, and were able to provide an excellent environment for the MCM exam.

Hopefully Rob passed, and will soon officially be a Microsoft Certified Master in SQL Server. Big thanks also to Academy IT for playing their part in the process.

Replication explained

January 2, 2013

Replication can be a tricky thing for people to get their heads around. At LobsterPot Solutions we’re asked about it on a regular basis. Luckily, our own Rob Farley has put a post together about it, which has been cross-posted here.


I spent some time explaining SQL Server Replication to someone recently. They said they hadn’t ever really understood the concepts, and that I’d managed to help. It’s inspired me to write a post that I wouldn’t normally do – a “101” post. I’m not trying to do a fully comprehensive piece on replication, just enough to be able to help you get the concepts.

The way I like to think about replication is by comparing it to magazines. The analogy only goes so far, but let’s see how we go.

The things being replicated are articles. A publication (the responsibility of a publisher) is a collection of these articles. At the other end of the process are people with subscriptions. It’s just like when my son got a magazine subscription last Christmas. Every month, the latest set of articles got delivered to our house – FourFourTwo, an excellent magazine, particularly when they’re doing an article about the Arsenal. Most of the work is done by agents, such as the newsagent that gets it to my house.

In SQL Server, these same concepts hold. The objects which are being replicated are articles (typically tables, but also stored procedures, functions, view definitions, and even indexed views). You might not replicate your whole database – just the tables and other objects of interest. These articles make up a publication. Replication is just about getting that stuff to the Subscribers.

Of course, the magazine analogy breaks down quite quickly. Each time my son got a new edition, the articles were brand new – material he’d never seen before. In SQL Replication, the Subscribers probably have data from earlier. But this brings us to look at a key concept in SQL Replication – how the stupid thing starts.

Regardless of what kind of replication you’re talking about, the concept is all about keeping Subscribers in sync with the Publisher. You could have the whole table move across every time, but more than likely, you’re going to just have the changes go through. At some point, though, the thing has to get to a starting point.

This starting point is (typically) done using a snapshot. It’s not a “Database Snapshot” like what you see in the Object Explorer of SQL Server Management Studio – this is just a starting point for replication. It’s a dump of all the data and metadata that make up the articles, and it’s stored on the file system. Not in a database, on the file system. A Subscriber will need this data to be initialised, ready for a stream of changes to be applied.

It’s worth noting that there is a flavour of replication which just uses snapshots, known as Snapshot Replication. Every time the subscriber gets a refresh of data, the whole publication has to flow down. This might be fine for small amounts of data, but not for larger ones.

(There are other ways to get started too, such as by restoring a backup, but you should still be familiar with the concept of snapshots for replication.)

To get in sync, a subscriber would need the data in the snapshot for initialisation, and then every change that has happened since. To reduce the effort that would be required if something went drastically wrong and a new subscription became needed, snapshots can be recreated at regular intervals. This is done by the Snapshot Agent, and like all agents, can be found as a SQL Server Agent job.
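Because each of these agents surfaces as a SQL Server Agent job, you can get a quick inventory of them with a query like this sketch against msdb (the REPL-* names are the job categories SQL Server uses for its replication jobs):

```sql
-- List the SQL Server Agent jobs created for replication,
-- grouped by category (e.g. REPL-Snapshot, REPL-LogReader, REPL-Distribution)
SELECT c.name AS category, j.name AS job_name, j.enabled
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.syscategories AS c
    ON c.category_id = j.category_id
WHERE c.name LIKE 'REPL-%'
ORDER BY c.name, j.name;
```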

The middle-man between the Publisher and the Subscribers is the Distributor. The Distributor is essentially a bunch of configuration items (and as we’ll see later, changes to articles), stored in the distribution database – a system database that is often overlooked. If you query sys.databases on a SQL instance that has been configured as a Distributor, you’ll see a row for the distribution database. It won’t have a database_id less than 5, but it will have a value of 1 in the is_distributor column. The instance used as the Distributor is the one whose SQL Server Agent runs most of the replication agents, including the Snapshot Agent.
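The check just described looks like this when run on an instance that has been configured as a Distributor:

```sql
-- Find the distribution database on this instance;
-- is_distributor = 1 marks it, and its database_id will be 5 or higher
SELECT name, database_id, is_distributor
FROM sys.databases
WHERE is_distributor = 1;
```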

If you’re not doing Snapshot Replication, you’re going to want to get those changes through. Transactional Replication, as the name suggests, involves getting transactions that affect the published articles out to the subscribers. If the replication has been set up to push the data through, this should be quite low latency.

So that SQL Server isn’t having to check every transaction right in the middle of it, there’s a separate agent that looks through the log for transactions that are needed for the replication, copying them across to the distribution database, where they hang around for as long as they’re needed. This agent is the Log Reader Agent, and it also runs on the Distributor. You can imagine that there is a potential performance hit if this is running on a different machine to the Publisher, and this is one of the influencing factors that means you’ll typically have the Distributor running on the Publisher (although there are various reasons why you might not).
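If you want a rough idea of how far behind the Log Reader Agent is, the sp_replcounters procedure, run at the Publisher, returns per-database figures including the count of replicated transactions and the current latency:

```sql
-- At the Publisher: one row per published database, showing
-- replicated transaction counts, replication rate and latency
EXEC sys.sp_replcounters;
```

The Replication Monitor gives a friendlier view of the same kind of information, but this is handy when you just want numbers from a query window.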

Now we have a process which is making sure that initialisation is possible by getting snapshots ready, and another process which is looking for changes to the articles. The agent that gets this data out to Subscribers is the Distribution Agent. Despite its name, it can run at the Subscriber, if the Subscriber is set to pull data across (good for occasionally connected systems). This is like with my magazine – I might prefer to go to the newsagent and pick it up, if I’m not likely to be home when the postman comes around. In effect, my role as Subscriber includes doing some distribution if I want to pull the data through myself.

These three agents, Snapshot Agent, Log Reader Agent and Distribution Agent, make up the main agents used for Transactional Replication, which is probably the most common type of replication around. Snapshot Replication doesn’t use the Log Reader Agent, but still needs the other two.
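To make the pieces concrete, here’s a minimal sketch of creating a transactional publication with the standard replication stored procedures. The database, publication, table and server names are invented for illustration, and a real setup would also create the Snapshot Agent job (sp_addpublication_snapshot) and configure agent security:

```sql
USE SalesDB;

-- Enable the database for transactional publishing
EXEC sp_replicationdboption
    @dbname = N'SalesDB', @optname = N'publish', @value = N'true';

-- Create a publication and add a table to it as an article
EXEC sp_addpublication @publication = N'SalesPub', @status = N'active';
EXEC sp_addarticle
    @publication = N'SalesPub', @article = N'Orders',
    @source_object = N'Orders';

-- Create a push subscription, so the Distribution Agent
-- delivers changes to the Subscriber
EXEC sp_addsubscription
    @publication = N'SalesPub', @subscriber = N'SUBSERVER',
    @destination_db = N'SalesDBCopy', @subscription_type = N'Push';
```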

Now let’s consider the other types of replication.

Merge Replication involves having subscribers that can also change the data. It’s similar to Transactional Replication with Updateable Subscribers, which has been deprecated. These changes are sent back by the Merge Agent, which works out what changes have to be applied. This is actually more complicated than you might expect, because it’s very possible to have changes made in multiple places and for a conflict to arise. You can set defaults about who wins, and can override manually through the Replication Monitor (which is generally a useful tool for seeing if Subscribers are sufficiently in sync, testing the latency, and so on). Updateable Subscribers end up using the Queue Reader Agent instead of the Merge Agent. They’re slightly different in the way they run, but I consider them to be quite similar in function, as they both involve getting the data back into the publisher when changes have been made elsewhere.

Peer-to-Peer Replication is the final kind. This is really a special type of Transactional Replication, in which you have multiple publishers, all pushing data out to each other. It’s the option that is considered closest to a High Availability system, and is good across geographically wide environments, particularly if connections are typically routed to the closest server. Consider the example of servers in the UK, the US and Australia. Australian users can be connected to the local server, knowing the changes are going to be pushed out to the UK and US boxes. The servers are set up in a topology, with each server considered a node. Each server keeps track of which updates it’s had, which means they should be able to keep in sync, regardless of when they have downtime. If Australian changes are sent to the UK but not the US, then the US server can be updated by the UK server if that’s easier.

Replication can feel complex. There are a lot of concepts that are quite alien to most database administrators. However, the benefits of replication can be significant, and are worth taking advantage of in many situations. It’s an excellent way of keeping data in sync across a number of servers, without many of the server availability hassles associated with log-shipping or mirroring. It can definitely help you achieve scale-out environments, particularly if you consider Peer-to-Peer, which can help you offload your connections to other servers, knowing the key data can be kept up-to-date easily.

I haven’t tried to be completely comprehensive in this quick overview of replication, but if you’re new to the concepts, or you’re studying for one of the MCITP exams and need to be able to get enough of an understanding to get you by, then I hope this has helped demystify it somewhat.

There’s more in SQL Books Online, of course – a whole section on Replication. If what I’ve written makes sense, go exploring, and try to get it running on your own servers too. Maybe my next post will cover some of that.


Roger Noble – Outstanding Volunteer!

November 1, 2012

Every month, PASS awards someone in the global SQL community their Outstanding Volunteer award. In September, it was LobsterPot Solutions’ Roger Noble! Roger wrote about it on his blog, so head over there and read it. Obviously here at LobsterPot we’re very proud of him, having seen how much effort he puts in – running the BI Virtual Chapter Aus edition, being on the Program Committee for Summit and even creating a Pivot collection of the Summit sessions. Congratulations, Roger!

LobsterPot involvement at the PASS Summit

October 22, 2012

It goes without saying that LobsterPot Solutions is a great supporter of the SQL Server community, and PASS in particular. I thought it would be good to look at how each of the LobsterPot staff is going to be involved at the upcoming PASS Summit.


Rob’s been on the PASS board for over a year now, and is set to continue in this role for some time. As a user group leader and current co-ordinator of PASS’s semi-annual 24HOP event, he has a number of meetings and duties alongside the Board Meetings. If you’re going to be at the event, you shouldn’t have too much trouble finding Rob – he’ll even be involved in prayer meetings in the early mornings.

Roger is speaking this year! His session is on Thursday afternoon, and is on Reporting Services. As well as this, 2012 has seen Roger’s PASS involvement extend into leading the Australian arm of the BI Virtual Chapter. This means he’s been finding speakers for the monthly meetings he hosts, and you might see him wandering around the Summit in a VC leaders’ jacket.

Julie has been a co-presenter at the PASS Summit before, but this year is only giving a Lightning Talk. I shouldn’t say ‘only’, because the pressure in delivering ten minutes of content is at least as high as giving an hour or more. The Lightning Talks are punchy and poignant, energetic and expert-level. Julie’s also leading a First-Timers’ group at the Summit, and speaking at SQL Saturday events in Portland and Olympia either side of the Summit.

Ben and Heidi are both experiencing their second PASS Summit, and like Julie are both involved with the First-Timers. Ben is leading a First-Timers’ group and Heidi has been on the First-Timers’ committee this year, working on a team to find ways to help people at their first Summit get the most out of it. The first time anyone experiences a PASS Summit, with 4000 people excited to learn about SQL Server and mix with other SQL Server experts, it can be hard to know what’s important. Being in an Australian time zone can be tricky for organising committees, and Heidi has really gone a long way in helping. Ben, Julie and Heidi will have a great time helping first-timers find their feet.

This is Martin’s first PASS Summit. He’s experienced conferences around the world before, including a number of SQL Bits events in the UK. He’s taking part in the First-Timers’ event, but from the other side of the room.

All of us certified in SQL Server 2012

June 23, 2012

Microsoft have introduced some new certifications for SQL Server 2012. Instead of Microsoft Certified Technology Specialist and IT Professional, they have moved to Solutions Associate and Solutions Expert, with a new set of exams. All the LobsterPot consultants sat some of the new exams in beta, and all of us are now certified in SQL Server 2012!


Rob sat all seven exams and picked up both MCSE: Data Platform and MCSE: BI. Ashley got MCSE: BI too, while Martin and Roger are each only one exam away from MCSE certifications. Ben and Heidi also passed exams, and in doing so, Heidi became a brand new Microsoft Certified Professional!

LobsterPot HTML5 PivotViewer – now Open Source!

April 23, 2012

Hi – Roger here. Two months ago I posted about a project that I’ve been working on during down time here at LobsterPot: a port of the Silverlight PivotViewer control built exclusively on web technologies – HTML5 and JavaScript. If you’re not familiar with PivotViewer, it’s a visualisation tool that I’ve always felt never got the attention it deserved.

So I put an early version out there to see what people thought – not expecting much. Well, I can honestly say that the response has been overwhelmingly positive; I’ve been inundated with requests to finish it off, as people were excited to build collections with their own data.

So I’m pleased to announce that the LobsterPot HTML5 PivotViewer is now an Open Source project hosted on CodePlex. You can find it here: http://lobsterpothtml5pv.codeplex.com.


The control is still very much a work in progress and there are still pieces of functionality missing. I’ll be updating the documentation over the next few days, and the plan is to continue work on the control so that it can render static CXML-based collections as well as its Silverlight counterpart does.

If you’ve got an existing CXML-based collection, then please download the source and let me know how well it works, and whether there are any bugs or missing functionality. The LobsterPot HTML5 PivotViewer has been built as a jQuery plugin with extensibility in mind. I’ll be posting more about ways the control can be enhanced, including how to get started extending it to work with other data sources.

Going forward, the plan is to have two versions of the control: the open source version, which will support static CXML-based collections, and a paid version that will be enhanced with dynamic collections, tile templates and additional views for mapping, data grids and charts. If you’re interested in having LobsterPot build a collection for you, please contact us.

T-SQL in Chicago – LobsterPot teams up with Data Education

March 26, 2012

In May, I’ll be in the US. I have board meetings for PASS at the SQLRally event in Dallas, and then I’m going to be spending a bit of time in Chicago.

The big news is that while I’m in Chicago (May 14-16), I’m going to teach my “Advanced T-SQL Querying and Reporting: Building Effectiveness” course. This is a course that I’ve been teaching since the 2005 days, and have modified over time for 2008 and 2012. It’s very much my most popular course, and I love teaching it. Let me tell you why.


For years, I wrote queries and thought I was good at it. I was a developer. I’d written a lot of C (and other, more fun languages like Prolog and Lisp) at university, and then got into the ‘real world’ and coded in VB, PL/SQL, and so on through to C#, and saw SQL (whichever database system it was) as just a way of getting the data back. I could write a query to return just about whatever data I wanted, and that was good. I was better at it than the people around me, and that helped. (It didn’t help my progression into management, then it just became a frustration, but for the most part, it was good to know that I was good at this particular thing.)

But then I discovered the other side of querying – the execution plan. I started to learn about the translation from what I’d written into the plan, and this impacted my query-writing significantly. I look back at the queries I wrote before I understood this, and shudder. I wrote queries that were correct, but often a long way from effective. I’d done query tuning, but had largely done it without considering the plan, just inferring what indexes would help.

This is not a performance-tuning course. It’s focused on the T-SQL that you read and write. But performance is a significant and recurring theme. Effective T-SQL has to be about performance – it’s the biggest way that a query becomes effective. There are other aspects too though – such as using constructs better. For example – I can write code that modifies data nicely, but if I haven’t learned about the MERGE statement and the way that it can impact things, I’m missing a few tricks.
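As a taste of the kind of construct I mean, here’s a minimal MERGE sketch (the table and column names are invented for the example) that updates existing rows and inserts new ones in a single statement:

```sql
-- Upsert: update matching Customers rows from staging, insert the rest
MERGE dbo.Customers AS tgt
USING dbo.CustomerStaging AS src
    ON tgt.CustomerID = src.CustomerID
WHEN MATCHED THEN
    UPDATE SET tgt.Name = src.Name, tgt.City = src.City
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerID, Name, City)
    VALUES (src.CustomerID, src.Name, src.City);
```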

If you’re going to do this course, a good place to be is the situation I was in a few years before I wrote it. You’re probably comfortable with writing T-SQL queries. You know how to make a SELECT statement do what you need it to, but feel there has to be a better way. You can write JOINs easily, and understand how to use LEFT JOIN to make sure you don’t filter out rows from the first table, but you’re coding blind.
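For instance, with LEFT JOIN there’s a subtlety that catches a lot of people: where you put a filter on the second table changes the meaning. A small sketch (table names invented):

```sql
-- Keeps every Customers row; the Orders filter lives in the ON clause,
-- so customers with no 2012 orders still appear (with NULL OrderID).
-- Moving that predicate to the WHERE clause would silently turn this
-- into an inner join, dropping those customers.
SELECT c.CustomerID, o.OrderID
FROM dbo.Customers AS c
LEFT JOIN dbo.Orders AS o
    ON o.CustomerID = c.CustomerID
   AND o.OrderDate >= '20120101';
```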

The first module I cover is on Query Execution. Take a look at the Course Outline at Data Education’s website. The first part of the first module is on the components of a SELECT statement (where I make you think harder about GROUP BY than you probably have before), but then we jump straight into Execution Plans. Some stuff on indexes is in there too, as is simplification and SARGability. Some of this is stuff that you may have heard me present on at conferences, but here you have me for three days straight. I’m sure you can imagine that we revisit these topics throughout the rest of the course as well, and you’d be right. In the second and third modules we look at a bunch of other aspects, including some of the T-SQL constructs that lots of people don’t know, and various other things that can help your T-SQL be, well, more effective.
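SARGability is a good example of the sort of thing that changes how you write queries once you’ve seen it in a plan. A sketch, assuming a hypothetical Orders table with an index on OrderDate:

```sql
-- Non-SARGable: wrapping the column in a function means the predicate
-- can't be used for an index seek, so the whole index gets scanned
SELECT OrderID
FROM dbo.Orders
WHERE YEAR(OrderDate) = 2012;

-- SARGable rewrite: a range predicate on the bare column
-- lets the optimiser seek on an OrderDate index instead
SELECT OrderID
FROM dbo.Orders
WHERE OrderDate >= '20120101' AND OrderDate < '20130101';
```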

I’ve had quite a lot of people do this course and be itching to get back to work even on the first day. That’s not a comment about the jokes I tell, but because people want to look at the queries they run.

LobsterPot Solutions is thrilled to be partnering with Data Education to bring this training to Chicago. Visit their website to register for the course.