
LobsterPot Solutions™ is proud to be a Gold Competency Partner in the Microsoft Partner Network.

Welcome to

Improving your data story.

LobsterPot Solutions is an Australian SQL Server and Business Intelligence consultancy, offering consulting and training services. We specialise in the Microsoft Data Platform, including Power BI, SQL Server and Azure, covering everything from data resilience to data analytics, Big Data and IoT, and of course performance tuning, health checks, and more. With experts in both Melbourne and Adelaide, we can help your organisation become more data-driven.

LobsterPot Solutions is a company of firsts. When the Microsoft Partner Network went live, we were the first company in Australia to become a Gold Competency Partner, and the first in the world to gain the Gold Competency in Business Intelligence. Since then we have become the first to employ three Australian SQL MVPs and the first company in the whole Asian region to have an APS / PDW trainer on staff, and we have been involved in training other trainers in the region.

Performance Tuning

Dashboards and Reporting

Proofs of Concept

More Services here.

News and Events

Replication explained

Replication can be a tricky thing for people to get their heads around. At LobsterPot Solutions we’re asked about it on a regular basis. Luckily, our own Rob Farley has put a post together about it, which has been cross-posted here.


I spent some time explaining SQL Server Replication to someone recently. They said they hadn’t ever really understood the concepts, and that I’d managed to help. It’s inspired me to write a post that I wouldn’t normally do – a “101” post. I’m not trying to do a fully comprehensive piece on replication, just enough to be able to help you get the concepts.

The way I like to think about replication is by comparing it to magazines. The analogy only goes so far, but let’s see how we go.

The things being replicated are articles. A publication (the responsibility of a publisher) is a collection of these articles. At the other end of the process are people with subscriptions. It’s just like when my son got a magazine subscription last Christmas. Every month, the latest set of articles got delivered to our house. (Feel free to subscribe to FourFourTwo – excellent magazine, particularly when they’re doing an article about the Arsenal.) Most of the work is done by agents, such as the newsagent that gets it to my house.

In SQL Server, these same concepts hold. The objects which are being replicated are articles (typically tables, but also stored procedures, functions, view definitions, and even indexed views). You might not replicate your whole database – just the tables and other objects of interest. These articles make up a publication. Replication is just about getting that stuff to the Subscribers.
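These concepts map directly onto the replication stored procedures. As a rough sketch (the database, publication and table names here are made up for illustration), publishing a single table as an article of a transactional publication might look something like this:

```sql
-- Enable the database for publishing, then create a transactional
-- publication and add one table to it as an article.
-- (Database, publication and table names are hypothetical.)
EXEC sp_replicationdboption
    @dbname = N'SalesDB',
    @optname = N'publish',
    @value = N'true';

EXEC sp_addpublication
    @publication = N'SalesPublication',
    @repl_freq = N'continuous';   -- transactional, rather than snapshot-only

EXEC sp_addarticle
    @publication = N'SalesPublication',
    @article = N'Orders',
    @source_owner = N'dbo',
    @source_object = N'Orders';
```

In practice you’d normally drive this through the New Publication Wizard in Management Studio, which generates calls like these for you.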

Of course, the magazine analogy breaks down quite quickly. Each time my son got a new edition, the articles were brand new – material he’d never seen before. In SQL Replication, the Subscribers probably have data from earlier. But this brings us to look at a key concept in SQL Replication – how the stupid thing starts.

Regardless of what kind of replication you’re talking about, the concept is all about keeping Subscribers in sync with the Publisher. You could have the whole table move across every time, but more than likely, you’re going to just have the changes go through. At some point, though, the thing has to get to a starting point.

This starting point is (typically) done using a snapshot. It’s not a “Database Snapshot” like what you see in the Object Explorer of SQL Server Management Studio – this is just a starting point for replication. It’s a dump of all the data and metadata that make up the articles, and it’s stored on the file system. Not in a database, on the file system. A Subscriber will need this data to be initialised, ready for a stream of changes to be applied.

It’s worth noting that there is a flavour of replication which just uses snapshots, known as Snapshot Replication. Every time the subscriber gets a refresh of data, it’s the whole publication that has to flow down. This might be fine for small sets of data, but not for others.

(There are other ways to get started too, such as by restoring a backup, but you should still be familiar with the concept of snapshots for replication.)

To get in sync, a subscriber would need the data in the snapshot for initialisation, and then every change that has happened since. To reduce the effort that would be required if something went drastically wrong and a new subscription became needed, snapshots can be recreated at regular intervals. This is done by the Snapshot Agent, and like all agents, can be found as a SQL Server Agent job.

The middle-man between the Publisher and the Subscribers is the Distributor. The Distributor is essentially a bunch of configuration items (and as we’ll see later, changes to articles), stored in the distribution database – a system database that is often overlooked. If you query sys.databases on a SQL instance that has been configured as a Distributor you’ll see a row for the distribution database. It won’t have a database_id less than 5, but it will have a value of 1 in the is_distributor column. The instance used as the Distributor is the one whose SQL Server Agent runs most of the replication agents, including the Snapshot Agent.
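For example, on an instance configured as a Distributor, a query like this should return a row for the distribution database (the database name can vary if it was customised during setup):

```sql
-- List any databases flagged as distribution databases on this instance.
SELECT name, database_id, is_distributor
FROM sys.databases
WHERE is_distributor = 1;
```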

If you’re not doing Snapshot Replication, you’re going to want to get those changes through. Transactional Replication, as the name suggests, involves getting transactions that affect the published articles out to the subscribers. If the replication has been set up to push the data through, this should be quite low latency.

So that SQL Server isn’t having to check every transaction right in the middle of it, there’s a separate agent that looks through the log for transactions that are needed for the replication, copying them across to the distribution database, where they hang around as long as they’re needed. This agent is the Log Reader Agent, and also runs on the Distributor. You can imagine that there is a potential performance hit if this is running on a different machine to the Publisher, and this is one of the influencing factors that means that you’ll typically have the Distributor running on the Publisher (although there are various reasons why you might not).

Now we have a process which is making sure that initialisation is possible by getting snapshots ready, and another process which is looking for changes to the articles. The agent that gets this data out to Subscribers is the Distribution Agent. Despite its name, it can run at the Subscriber, if the Subscriber is set to pull data across (good for occasionally connected systems). This is like with my magazine – I might prefer to go to the newsagent and pick it up, if I’m not likely to be home when the postman comes around. In effect, my role as Subscriber includes doing some distribution if I want to pull the data through myself.
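The push/pull choice is made when the subscription is created. A hedged sketch (publication, server and database names are all hypothetical) of a pull subscription might look like this:

```sql
-- A pull subscription: the Distribution Agent runs at the Subscriber,
-- which suits occasionally connected systems.
-- (Publication, server and database names are hypothetical.)
EXEC sp_addsubscription
    @publication = N'SalesPublication',
    @subscriber = N'SubscriberServer',
    @destination_db = N'SalesReplica',
    @subscription_type = N'pull';
```

Change @subscription_type to N'push' and the Distribution Agent runs at the Distributor instead, pushing changes out as they arrive.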

These three agents, Snapshot Agent, Log Reader Agent and Distribution Agent, make up the main agents used for Transactional Replication, which is probably the most common type of replication around. Snapshot Replication doesn’t use the Log Reader Agent, but still needs the other two.

Now let’s consider the other types of replication.

Merge Replication involves having subscribers that can also change the data. It’s similar to Transactional Replication with Updateable Subscribers, which has been deprecated. These changes are sent back to the Merge Agent, which works out what changes have to be applied. This is actually more complicated than you might expect, because it’s very possible to have changes made in multiple places and for a conflict to arise. You can set defaults about who wins, and can override manually through the Replication Monitor (which is generally a useful tool for seeing if Subscribers are sufficiently in sync, testing the latency, and so on). Updateable Subscribers end up using the Queue Reader Agent instead of the Merge Agent. They’re slightly different in the way they run, but I consider them to be quite similar in function, as they both involve getting the data back into the publisher when changes have been made elsewhere.

Peer-to-Peer Replication is the final kind. This is really a special type of Transactional Replication, in which you have multiple publishers, all pushing data out to each other. It’s the option that is considered closest to a High Availability system, and is good across geographically wide environments, particularly if connections are typically routed to the closest server. Consider the example of servers in the UK, the US and Australia. Australian users can be connected to the local server, knowing the changes are going to be pushed out to the UK and US boxes. They’re set up in a topology, with each server considered a node. Each server keeps track of which updates it’s had, which means they should be able to keep in sync, regardless of when they have downtime. If Australian changes are sent to the UK but not the US, then the US can be updated by the UK server if that’s easier.

Replication can feel complex. There are a lot of concepts that are quite alien to most database administrators. However, the benefits of replication can be significant, and are worth taking advantage of in many situations. It’s an excellent way of keeping data in sync across a number of servers, without many of the server availability hassles associated with log-shipping or mirroring. It can definitely help you achieve scale-out environments, particularly if you consider Peer-to-Peer, which can help you offload your connections to other servers, knowing the key data can be kept up-to-date easily.

I haven’t tried to be completely comprehensive in this quick overview of replication, but if you’re new to the concepts, or you’re studying for one of the MCITP exams and need to be able to get enough of an understanding to get you by, then I hope this has helped demystify it somewhat.

There’s more in SQL Books Online, of course – a whole section on Replication. If what I’ve written makes sense, go exploring, and try to get it running on your own servers too. Maybe my next post will cover some of that.


Roger Noble – Outstanding Volunteer!

Every month, PASS awards someone in the global SQL community their Outstanding Volunteer award. In September, it was LobsterPot Solutions’ Roger Noble! Roger wrote about it on his blog, so head over there and read it. Obviously here at LobsterPot we’re very proud of him, having seen how much effort he puts in, running the BI Virtual Chapter Aus edition, being on the Program Committee for Summit and even creating a Pivot collection of the Summit sessions. Congratulations, Roger!

LobsterPot involvement at the PASS Summit

It goes without saying that LobsterPot Solutions is a great supporter of the SQL Server community, and PASS in particular. I thought it would be good to look at how each of the LobsterPot staff is going to be involved at the upcoming PASS Summit.


Rob’s been on the PASS board for over a year now, and is set to continue in this for some time. As a user group leader and current co-ordinator of PASS’s semi-annual 24HOP event, he has a number of meetings and duties to go alongside the Board Meetings that will be on. If you’re going to be at the event, you shouldn’t have too much trouble finding Rob – he’ll even be involved in prayer meetings in the early mornings.

Roger is speaking this year! His session is on Thursday afternoon, and is on Reporting Services. As well as this, 2012 has seen Roger’s PASS involvement extend into leading the Australian arm of the BI Virtual Chapter. This means he’s been finding speakers for the monthly meetings he hosts, and you might see him wandering around the Summit in a VC leaders’ jacket.

Julie has been a co-presenter at the PASS Summit before, but this year is only giving a Lightning Talk. I shouldn’t say ‘only’, because the pressure in delivering ten minutes of content is at least as high as giving an hour or more. The Lightning Talks are punchy and poignant, energetic and expert-level. Julie’s also leading a First-Timers’ group at the Summit, and speaking at SQL Saturday events in Portland and Olympia either side of the Summit.

Ben and Heidi are both experiencing their second PASS Summit, and like Julie are both involved with the First-Timers. Ben is leading a First-Timers’ group and Heidi has been on the First-Timers’ committee this year, working on a team to find ways to help people at their first Summit get the most out of it. The first time anyone experiences a PASS Summit, with 4000 people excited to learn about SQL Server and mix with other SQL Server experts, it can be hard to know what’s important. Being in an Australian time zone can be tricky for organising committees, and Heidi has really gone a long way in helping these people. Ben, Julie and Heidi will have a great time helping first-timers make the most of it.

This is Martin’s first PASS Summit. He’s experienced conferences around the world before, including a number of SQL Bits events in the UK. He’s taking part in the First-Timers’ event, but from the other side of the room.

All of us certified in SQL Server 2012

Microsoft have introduced some new certifications for SQL Server 2012. Instead of Microsoft Certified Technology Specialist and IT Professional, they have moved to Solutions Associate and Solutions Expert with a new set of exams. All the LobsterPot consultants sat some of the new exams in beta, and all of us are now certified in SQL Server 2012!


Rob sat all seven exams and picked up both MCSE: Data Platform and MCSE: BI. Ashley got MCSE: BI too, while Martin and Roger are both only one exam off getting MCSE certifications. Ben and Heidi also passed exams, and in doing so, Heidi has become a brand new Microsoft Certified Professional!

LobsterPot HTML5 PivotViewer – now Open Source!

Hi – Roger here. Two months ago I posted about a project that I’ve been working on during down time here at LobsterPot: a port of the Silverlight PivotViewer control that has been built exclusively on web technologies – HTML5 and JavaScript. If you’re not familiar with PivotViewer, it’s a visualisation tool that I’ve always felt never got the attention it deserved.

So I put an early version out there to see what people thought – not expecting much. Well, I can honestly say that the response has been overwhelmingly positive; I’ve been inundated with requests to finish it off, as people were excited to build collections with their own data.

So I’m pleased to announce that the LobsterPot HTML5 PivotViewer is now an Open Source project hosted on CodePlex. You can find it here: http://lobsterpothtml5pv.codeplex.com.


The control is still very much a work in progress and there are still pieces of functionality that are missing. I’ll be updating the documentation over the next few days, and the plan is to continue work on the control so that it can render static CXML based collections as well as its Silverlight counterpart.

If you’ve got an existing CXML based collection then please download the source and let me know how well it works, and whether there are any bugs or pieces of functionality that are missing. The LobsterPot HTML5 PivotViewer has been built as a jQuery plugin with extensibility in mind. I’ll be posting more about ways that the control can be enhanced, including how to get started extending it to work with other data sources.

Going forward, the plan is to have two versions of the control: an open source version that will support static CXML based collections, and a paid version that will be enhanced with dynamic collections, tile templates and additional views for mapping, data grids and charts. If you’re interested in having LobsterPot build a collection for you, please contact us.

T-SQL in Chicago – the LobsterPot teams with DataEducation

In May, I’ll be in the US. I have board meetings for PASS at the SQLRally event in Dallas, and then I’m going to be spending a bit of time in Chicago.

The big news is that while I’m in Chicago (May 14-16), I’m going to teach my “Advanced T-SQL Querying and Reporting: Building Effectiveness” course. This is a course that I’ve been teaching since the 2005 days, and have modified over time for 2008 and 2012. It’s very much my most popular course, and I love teaching it. Let me tell you why.


For years, I wrote queries and thought I was good at it. I was a developer. I’d written a lot of C (and other, more fun languages like Prolog and Lisp) at university, and then got into the ‘real world’ and coded in VB, PL/SQL, and so on through to C#, and saw SQL (whichever database system it was) as just a way of getting the data back. I could write a query to return just about whatever data I wanted, and that was good. I was better at it than the people around me, and that helped. (It didn’t help my progression into management, where it just became a frustration, but for the most part it was good to know that I was good at this particular thing.)

But then I discovered the other side of querying – the execution plan. I started to learn about the translation from what I’d written into the plan, and this impacted my query-writing significantly. I look back at the queries I wrote before I understood this, and shudder. I wrote queries that were correct, but often a long way from effective. I’d done query tuning, but had largely done it without considering the plan, just inferring what indexes would help.

This is not a performance-tuning course. It’s focused on the T-SQL that you read and write. But performance is a significant and recurring theme. Effective T-SQL has to be about performance – it’s the biggest way that a query becomes effective. There are other aspects too though – such as using constructs better. For example – I can write code that modifies data nicely, but if I haven’t learned about the MERGE statement and the way that it can impact things, I’m missing a few tricks.
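As a taste of that, here’s a minimal MERGE sketch (both table names are made up for illustration) that updates, inserts and deletes in a single statement, where separate UPDATE, INSERT and DELETE statements would otherwise be needed:

```sql
-- Synchronise dbo.Customers (target) with dbo.StagedCustomers (source)
-- in one statement: update matches, insert new rows, delete absentees.
-- (Both table names are hypothetical.)
MERGE dbo.Customers AS t
USING dbo.StagedCustomers AS s
    ON t.CustomerID = s.CustomerID
WHEN MATCHED THEN
    UPDATE SET t.Name = s.Name, t.City = s.City
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerID, Name, City) VALUES (s.CustomerID, s.Name, s.City)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
```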

If you’re going to do this course, a good place to be is the situation I was in a few years before I wrote this course. You’re probably comfortable with writing T-SQL queries. You know how to make a SELECT statement do what you need it to, but feel there has to be a better way. You can write JOINs easily, and understand how to use LEFT JOIN to make sure you don’t filter out rows from the first table, but you’re coding blind.

The first module I cover is on Query Execution. Take a look at the Course Outline at Data Education’s website. The first part of the first module is on the components of a SELECT statement (where I make you think harder about GROUP BY than you probably have before), but then we jump straight into Execution Plans. Some stuff on indexes is in there too, as is simplification and SARGability. Some of this is stuff that you may have heard me present on at conferences, but here you have me for three days straight. I’m sure you can imagine that we revisit these topics throughout the rest of the course as well, and you’d be right. In the second and third modules we look at a bunch of other aspects, including some of the T-SQL constructs that lots of people don’t know, and various other things that can help your T-SQL be, well, more effective.
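To give a flavour of the SARGability material, compare these two queries against a hypothetical table: they’re logically equivalent, but only the second lets the optimiser perform a seek on an index on OrderDate.

```sql
-- Not SARGable: wrapping the column in a function hides it
-- from an index seek, forcing a scan.
SELECT OrderID
FROM dbo.Orders
WHERE YEAR(OrderDate) = 2012;

-- SARGable: a range predicate on the bare column can use an index seek.
SELECT OrderID
FROM dbo.Orders
WHERE OrderDate >= '20120101' AND OrderDate < '20130101';
```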

I’ve had quite a lot of people do this course and be itching to get back to work even on the first day. That’s not a comment about the jokes I tell, but because people want to look at the queries they run.

LobsterPot Solutions is thrilled to be partnering with Data Education to bring this training to Chicago. Visit their website to register for the course.


SQL Community events in Melbourne

LobsterPot Solutions is a company that is fully dedicated to supporting the SQL Server community.


Its owner, Rob Farley, has supported conferences such as SQLBits and PASS, and has had significant involvement in local user groups, including running the Adelaide SQL Server User Group for over six years. He has even been appointed as an advisor to the PASS Board of Directors, and is currently running for election to become a director. This mindset is common in all LobsterPot staff, who are all recognised faces at conferences both around Australia and in the rest of the world.

With this in mind, LobsterPot Solutions is proud to be the first sponsor of the SQL Server Social events in Melbourne, and to help coordinate the events. The SQL Server community is one of the strongest technical communities around the world, and events such as SQL Server Social help strengthen it even further. If you haven’t been to one, please make a point of adding it to your diary and getting to know more of your fellow database professionals.

To find out more about the next SQL Server Social event, visit the SQL Server Social blog.

World Series of PASS

I’m sure Americans understand the joke that is the “World Series” of baseball. Everyone else rolls their eyes and knows that this ‘world’ means North America.


Some people feel similarly about PASS, unfortunately. The North American Conference and the Global Summit are one and the same, and the 2013 Summit is to be held in Charlotte, North Carolina, to bring it “closer to database pros in the Eastern US who may not have been able to make the cross-country trip in the past.” The 24 Hours of PASS events have been split into two sets of 12, because numbers show that fewer people attend the sessions when America sleeps. “Two Days of Free SQL Server Training” is actually two nights where I live, and for most other people outside Europe and the Americas. (I’m actually on the organising committee for 24HOP, and am hoping to see this change back again soon.)

And so it’s very exciting to see that there are people within PASS who are trying to Globalise the organisation more. And that’s GlobaliSe with an eSs, not a Zed. Microsoft’s Mark Souza and the current PASS President Rushabh Mehta are among those keen to make this happen, and I was recently invited to Stockholm for some meetings with representatives from around the world to discuss how to make PASS more global (which unfortunately, also means less US-centric).

It’s an interesting challenge – trying to think how to cater best for people who are essentially a minority in an organisation. It’s clear that the majority of PASS members are in the US. This is no surprise – it’s also the place where more SQL Server licenses are sold. But there is also a large portion of the SQL Server world which is not being targeted by PASS. The US market is comparatively low-hanging fruit, but there’s something to be said for reaching further afield. The US community will still benefit from most initiatives that are designed to cater better for non-US folk, but the non-US folk are less likely to benefit from US-centric ones.

Part of the challenge will be around metrics. If PASS measures success by the number of members, then it makes sense to focus on the more-populated areas of America. On the other hand, if success is measured in other ways, perhaps even ones that are more qualitative than quantitative, then globalisation can make better sense.

The meetings in Stockholm were very interesting, and it was good to hear opinions from other parts of the world, not just Americans (or Australians). But one of the most interesting things happened towards the end of it.

Three of us were invited to be non-voting advisors on the PASS Board of Directors – Raoul Illyés (Denmark), James Rowland-Jones (UK) and me (Aus). These are appointed positions, for one year only (based on what the board is allowed to do). See page 4 of http://www.sqlpass.org/Portals/0/PASS%20Bylaws%20–%20June%202009.pdf: "The Board may also choose to appoint any number of non-voting advisors to the Board by a majority vote for a period of up to one year each."

Yes, these are currently non-voting positions that we’re taking on. But we do still have the ability to go through the election process to earn the ability to vote on board issues. I’m not sure we really need it though – this appointment will give the three of us (and in turn, our respective regions) a decent voice, even if the final decision is left to the existing people. In time, who knows how we can influence PASS and how PASS can influence the SQL world.

Finally – this isn’t about PASS taking over the world. It’s about PASS being better placed to support the worldwide community. This will probably involve supporting non-PASS events (such as SQLBits) too, and possibly even reducing the focus on requiring communities to sign up as official PASS chapters. A lot of things might change, but the hope and focus is on making PASS a better vehicle for supporting the worldwide SQL Server community.


PASS Summit pre-conference seminar by Rob Farley

The annual PASS Summit in North America is the largest SQL Server event in the world. There is no other event on its scale anywhere. So it is with great honour that LobsterPot’s Rob Farley will be delivering a pre-conference seminar at the 2011 event, held in Seattle from October 11-14.


The seminar that Rob will be delivering is a slightly-modified version of the one that he delivered at the SQLBits conference in the UK in 2010, and is called “Fixing Queries Using Advanced T-SQL Constructs”. The details of both it and the other LobsterPot submissions for the PASS Summit can be seen at http://lobsterpot.com.au/lobsterpot-submissions-for-sqlpass.

The official announcement from PASS about the chosen seminars will be around June 1st, and the announcement about breakout sessions is expected to be a few weeks later.

New hires, new site, new location

Exciting times for LobsterPot – Martin & Heidi joining the team, a new website (lobsterpot.com.au), and a Melbourne presence!


In March, Martin Cairney joined the team. Martin is a long-time member of the local Adelaide SQL Server community, and has been a friend of mine for some years now. He has spoken at the Adelaide SQL user group before, and when 2007 saw him move to the UK for a while, I wished him all the best. Martin went on to speak at the first SQLBits event, and became a regular at those conferences until he came back to Adelaide last year. He also spoke at SQL community events around the UK, establishing a good profile there. It was definitely good to get him on board a couple of months ago.

This month, Heidi Hasting brings the number of people in the company to seven. Heidi has worked with some of our staff before, and I see her having a massive future with the company.

You can see the new website for yourself! It’s clearly much better than the old site, which was little more than a placeholder. The new one has the attention of a proper web designer (instead of just being a bit of HTML thrown together by me), runs on the WordPress platform, and features a picture of Glenelg jetty, taken by my lovely wife. Most Adelaide beaches are just amazing, and it’s nice to be able to show one off on the website.

But perhaps the biggest news – we have a Melbourne presence now!

Martin has moved to Melbourne, and we’re picking up clients there. I imagine I’ll make some trips over there myself in coming weeks/months, hopefully speak at the odd user group, visit clients, and maybe see some friends and family as well.

Expanding out of Adelaide has been on my mind for some time. While Adelaide is a great place to do business, and everyone should consider moving here, it’s small compared to many places (only the fifth largest city in Australia). Establishing LobsterPot in other places makes good sense. Melbourne is a great city – I used to live there, and my brothers and their families still do. We’ve had the occasional Melbourne-based client before, but never enough to consider having a consultant based there. We do have a consultant there now, and we are well-and-truly open for Melbourne business.