Today consisted of A LOT of spreadsheets. Never have I spent so many hours organizing, colour coding and calculating in Excel. I managed to sort my way through and get my head around some of the spreadsheets Guy sent me. One I found particularly useful was his ticket counts spreadsheet, which has graphs of paging per month during 2016, and ticket count data. I began by totalling the hours spent after hours on each ticket for the 7 months covered by the spreadsheet, giving me my first figure.
From there, I used the information provided by Adam: tickets and the hours spent on them for the last 60 days. I colour coded the different alerts, then totalled the hours spent on each alert. This gave me a second figure: the total hours spent after hours on each alert in the last 60 days.
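The per-alert totalling I did in Excel boils down to grouping tickets by alert and summing the hours. A minimal Python sketch of the same idea, using entirely made-up alert names and hours rather than Guy's or Adam's real data:

```python
from collections import defaultdict

# Hypothetical (alert, hours) pairs standing in for the real ticket data.
tickets = [
    ("Disk Space Low", 1.5),
    ("SQL Agent Job Failed", 0.75),
    ("Disk Space Low", 2.0),
    ("Log Shipping Delay", 0.5),
]

# Total the after-hours time spent per alert.
hours_per_alert = defaultdict(float)
for alert, hours in tickets:
    hours_per_alert[alert] += hours

print(dict(hours_per_alert))
```

In the spreadsheet this is effectively a SUMIF per alert name; the colour coding just makes the groups visible at a glance.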
Once I had the aforementioned figures, I added them to my Copy of Alerts spreadsheet, which is the one that will serve as my base document throughout my project work. From here I should be able to organize the pageable alerts based on the time spent on each, allowing me to assign a priority (1 to x) to each.
This morning I began by ordering the alerts in my Copy of Alerts spreadsheet based on hours spent after hours from 01.01.2016 to 01.07.2016. I had hoped the hours spent during 2016 would correspond with the hours in the last 60 days, but this was not true across all of the alerts. This made it slightly harder to prioritize the alerts, but I decided to use my judgement and will run the results past Adam next time we have a meeting.
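The ordering step itself is simple: sort the alerts by total after-hours time and number them from 1 (most time) to x (least). A small sketch with invented figures, not the real spreadsheet numbers:

```python
# Hypothetical totals: alert -> hours spent after hours during 2016.
hours_per_alert = {
    "Disk Space Low": 12.0,
    "SQL Agent Job Failed": 4.5,
    "Log Shipping Delay": 7.25,
}

# Rank alerts from most to least time spent; priority 1 = most time.
ranked = sorted(hours_per_alert.items(), key=lambda kv: kv[1], reverse=True)
priorities = {alert: rank for rank, (alert, _) in enumerate(ranked, start=1)}

print(priorities)
```

The judgement call comes in when the 2016 totals and the 60-day totals disagree on the ordering, which is exactly what I'll be checking with Adam.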
My next task was to go through another document provided by Guy: the template alerting and resolution process document. This document, created by Guy, outlines the top-level steps taken when resolving alerts. He is hoping to eventually have these built into the template in order to reduce the amount of time spent on the alerts after hours. For now, however, it provides a useful tool for me, something I can use as a comparison when shadowing the DBAs.
For similar reasons, I also copied the SharePoint alert resolution pages into a document. It will again become a tool to be used for comparison when shadowing the team, and a good resource for gaining an initial understanding of the 'textbook' resolution process provided to the DBAs by SQL Services.
At 10.30 I joined Namisha, Neil and Nadine while they had a meeting to mark their SMART assessments. Although I did not have to complete the SMART assessments myself, Adam thought it would be good for me to sit in on the meeting if I felt so inclined. I'm glad I attended the meeting, as I learnt a lot, from internal tools to customer service.
I then spent the remainder of the afternoon starting to watch the SQL Server Transact-SQL (basic data retrieval) videos. It sounds like the videos will cover a lot of what I have learnt in my classes at NMIT, so I will see how they go. The videos will also take me a wee while to finish as there are almost 140 of them, so today is a great day to get started while I have the time.
The videos covered how to set up a Transact-SQL learning environment, creating a basic SELECT statement, writing a query that accesses multiple data sources, and using functions to meet application and business requirements. As I mentioned, some of these videos discuss things I already know, but they also covered a lot that I didn't, so I still learnt a lot.
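To give a flavour of the query topics the videos cover, here is a small example using Python's built-in sqlite3 module as a stand-in learning environment (not SQL Server itself, and the tables and rows are invented): a SELECT that joins two tables and applies an aggregate function.

```python
import sqlite3

# In-memory database standing in for a practice SQL Server instance.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE alerts (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE tickets (id INTEGER PRIMARY KEY, alert_id INTEGER, hours REAL);
    INSERT INTO alerts VALUES (1, 'Disk Space Low'), (2, 'Job Failed');
    INSERT INTO tickets VALUES (1, 1, 1.5), (2, 1, 2.0), (3, 2, 0.75);
""")

# A query accessing multiple data sources (a JOIN) plus a function (SUM).
rows = conn.execute("""
    SELECT a.name, SUM(t.hours) AS total_hours
    FROM alerts AS a
    JOIN tickets AS t ON t.alert_id = a.id
    GROUP BY a.name
    ORDER BY total_hours DESC;
""").fetchall()

print(rows)
```

The same SELECT/JOIN/GROUP BY shape carries over to Transact-SQL almost unchanged, which is why the basics from my NMIT classes should transfer.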
First thing this morning, Guy approached me and asked if I would like to meet with him at some stage today to go over what he had given me last week, and to have the chance to ask any questions I might have. I spent the morning going through what I had done so far, and noting down any questions I might like to ask.
After I had noted down all of the questions I could think of regarding Guy's data and the alerts, I continued watching the SQL Server Transact-SQL (basic data retrieval) videos.
There are, however, only so many videos you can watch before it becomes hard to concentrate. So to take a break from watching videos, I sat with Neil and watched/helped him write a report. This was my first experience of actually writing a report, and it was quite interesting to look into the matrix and try to establish what was going on with all of the servers. After we completed that report, it was back to videos.
So, after meeting with Guy and having all of my questions answered and a few things clarified, the priority of my alerts has changed a wee bit. Guy is working on lowering the amount of noise the on-call DBAs are getting, which means fewer after-hours pages for them. Together we looked at the SLA documentation and established that the majority of the pageable alerts in the template should not be pageable based on what is covered by the SLAs. We managed to narrow the list down to 6 alerts that should be pageable based on the SLA documentation (down from the 15 as per the template). Because of this, Guy said I should make those my first priority even though they aren't the alerts which have had the most time spent on them. This is something I will need to run past Adam, as he wanted me to prioritize the alerts based on the amount of time spent on each ticket, so things may change again after speaking with him.
However, Guy also said he thinks I should be able to get through all 15 pageable alerts anyway, so the priority or order in which I tackle them shouldn't be too much of a concern. I'm glad someone has such faith in me!
As it stands, at the end of week 3, I'm happy with my progress and am eager to start shadowing some of the DBAs and learning about their individual processes and steps to resolution. I have now spent a total of 85 hours at SQL Services, so I'm 28.3% of the way there!