Method 1 – TSQL
SELECT ConnectionProperty('net_transport') AS 'net_transport'
     , ConnectionProperty('protocol_type') AS 'protocol_type'
     , ConnectionProperty('auth_scheme') AS 'auth_scheme'
     , ConnectionProperty('local_net_address') AS 'local_net_address'
     , ConnectionProperty('local_tcp_port') AS 'local_tcp_port'
     , ConnectionProperty('client_net_address') AS 'client_net_address'
     , ConnectionProperty('physical_net_transport') AS 'physical_net_transport'
Local_net_address was the field that I required. This post was inspired by https://www.sqlservercentral.com/forums/reply/1519373. Further information on ConnectionProperty can be found here.
Method 2 – DBA Tools
Test-DbaConnection <InstanceName>
See -> https://docs.dbatools.io/#Test-DbaConnection
The output includes a property IPAddress, which is what I wanted. If you have not looked at DBATools then you should; it is an amazing, open source project and a genuine time saver.
Are there any other methods which I have not thought of? Please let me know.
Since this one caught me out, and a few other people, it seemed like a good idea to blog about it. Some time ago the Azure portal got a makeover, which I have to say makes the portal look very pretty. Whilst setting up a proof of concept for PowerBI, I thought it was also a good time to look at setting up an Azure SQL database. The last time I did this was in the previous portal. So I duly set up the database; one of the last steps was to allow the IP address of my work organisation access to the database. So I googled and found this page - https://azure.microsoft.com/en-gb/documentation/articles/sql-database-configure-firewall-settings/
The Wrong Way
Open the new style Azure Portal. Click on browse (1). From the list of blades click once on SQL databases (Figure 1).
Figure 1 - Opening the SQL databases blade
Clicking on the 'SQL databases' blade will open a list of all the SQL databases that are in your account (Figure 2).
Figure 2 - List of SQL databases showing the area where to click to show settings
Clicking on the ellipsis will cause a new menu window to appear (Figure 3).
Figure 3 - Clicking on the ellipsis to get the settings link
Click once on the settings link; this will then open the settings window (Figure 4).
The Right Way
Open the new style Azure Portal. Click on browse (1). From the list of blades click once on SQL servers (Figure 5).
Figure 5 - Opening the SQL servers blade
Clicking on the 'SQL servers' blade will open a list of all the SQL servers in your account (Figure 6).
Figure 6 - List of SQL servers
Move the mouse over the row for the SQL server that you require to see the settings for. Click once with the left mouse button on the selected SQL server; in Figure 6 this is the SQL server 'kjsgbci03i'. This will then open the Settings window on the far right-hand side (see Figure 7).
Figure 7 - Settings page for SQL server
Click once on the firewall row and this will open the 'Firewall settings' window (see Figure 8).
Figure 8 - Firewall settings window
Final Note
Splitting the settings for SQL servers and SQL databases does make sense; the same logic is used within SSMS. That said, in the old style portal the link to the page for setting allowed IP addresses is under 'SQL databases' (see Figure 9), hence why I must have looked at the SQL databases blade first. At least that's my excuse and I am sticking to it ;->
Figure 9 - SQL server page with link to Manage allowed IP addresses (Firewall rules) in the old style Azure portal.
So what am I going to show in the following blog posts?
So that’s what I am going to show. Next, let’s look at what is required to do it.
Ingredients
First, download and install Azure Data Studio; you can download the program from here.
Once you have installed Azure Data Studio, open the application. In the menu, click ‘File’ and select ‘New Notebook’ (see Figure 1 below).
This will open a new notebook (yippee!!). This might not sound very exciting yet, however it is! When a new notebook opens, the Kernel must be set. The way that I think about this is that it sets the language which will be run in the notebook, and it defaults to SQL. What we want to run is Python 3. From the list of Kernels available, select ‘Python 3’; this will set the language that will be run in the notebook.
Figure 2 – selecting the Kernel (programming language) that will be run in the notebook.
Once ‘Python 3’ has been selected, if Python is not already set up and installed, Azure Data Studio will prompt you to set up and configure Python for notebooks. A screen will open, as we can see in Figure 3. For this blog post I accepted the default location and clicked on the ‘Install’ button.
Figure 3 – Install and configure python for use in Azure Data Studio
If everything has gone to plan, then you should see something that looks like Figure 4.
Figure 4 – installation of Python going as planned
Installing Python can take some time, so it might be a good idea to get a hot beverage or do something else until it has finished installing.
Figure 5 – Installation of python is now completed successfully
In sessionize.com it is possible to create different APIs to output data; in this example the data is output as JSON. It is also possible to select different parts of the data to be output; in this example ‘All Data’ is selected. Setting this up in sessionize.com is beyond the scope of this blog post, though it is very easy to do.
The last step is to get the URL to be called in the code; this can be seen in Figure 6 below.
Figure 6 - API/Embed screen in sessionize.com for Data Scotland 2019.
Figure 6a (yes, I forgot to include this until a later edit) shows the columns that are output from sessionize.com for the API endpoint used.
Figure 6a - Settings for Available API endpoint used in this blog post.
OK, enough setting up, let's write some code. To get access to other libraries in Python, the command that is used is import <library name>. In this example there are four libraries which are imported. If you run the code shown in Figure 7 you might get the error message shown.
Figure 7 – Error message if the package for the library being imported is not installed.
If you do see this error message then all you need to do is install the required package. In Figure 7, at the top left-hand side, there is a button titled ‘Install Packages’. Click on that button and the terminal window will open (see Figure 8). The command that installs the ‘pyodbc’ library is ‘.\python.exe -m pip install pyodbc’; type the command into the terminal window and press Enter.
Figure 8 – Entering the command to install the ‘pyodbc’ package in the terminal window.
Hopefully the ‘pyodbc’ package will install without any challenges. If, like me, you are not so lucky and you get the error message shown in Figure 9, then this is quite easy to fix.
Figure 9 – Error message stating PIP (Pip Installs Packages) needs to be upgraded to install the ‘pyodbc’ package
If you get the error message shown in Figure 9 then enter the following command at the prompt: ‘.\python.exe -m pip install --upgrade pip’. If everything goes well you will see a message like the one shown in Figure 10.
Figure 10 – Successfully upgraded PIP to v 18.
Once the new version of PIP has been installed, restart Azure Data Studio. Then open a notebook, select Python 3 as the kernel language, click on ‘Install Packages’ and install the ‘pyodbc’ library (see Figure 8). Once ‘pyodbc’ has been installed, it is time to run the Python script.
The Python script will do the following:
1 - Call the API and read the JSON string returned into a dict object, which is then cast to a string object.
2 - Open a connection to a SQL database and run a SQL script to create the table if it does not exist.
3 - Insert the JSON string into a field in the table.
Below is the Python script that is used. Much of the credit must go to the various websites which I have added as references in the script. In Figure 11 we can see the script that is used. All that is required to change is the URL for the sessionize.com API and the user credentials in the connection string; otherwise this is the script I used.
Figure 11 - Python script in Azure Data Studio Notebook to import JSON into SQL Server 2016
The Azure Data Studio Notebook that is shown in Figure 11 can be downloaded from here.
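For reference, the SQL that the script runs in steps 2 and 3 boils down to something like the sketch below. This is only a sketch: the table name dbo.SessionizeData is my own assumption (the later posts only confirm that the JSON ends up in a field called [Data]), and in the real script the INSERT is executed from Python via pyodbc with the JSON string passed in as a parameter.

-- Step 2: create the table if it does not already exist (table name assumed)
IF OBJECT_ID('dbo.SessionizeData', 'U') IS NULL
BEGIN
    CREATE TABLE dbo.SessionizeData
    (
        Id     INT IDENTITY(1,1) PRIMARY KEY,
        [Data] NVARCHAR(MAX) NOT NULL      -- the whole JSON document as a single string
    );
END;

-- Step 3: insert the JSON string returned by the sessionize.com API
DECLARE @JsonString NVARCHAR(MAX) = N'{"sessions": [], "speakers": []}';   -- placeholder; the real value comes from the API call
INSERT INTO dbo.SessionizeData ([Data]) VALUES (@JsonString);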
In the next blog post we will look at how to work with the JSON data in SQL Server.
Figure 1 – The function ISJSON() returns 1 showing the data in the field [Data] is valid JSON
Now the JSON data is in the SQL database, let's see about doing something useful with it. In SQL Server 2016 a number of new functions were added that allow the querying and manipulation of JSON data. Having done some research, I found this blog post - https://visakhm.blogspot.com/2016/07/whats-new-in-sql-2016-native-json_13.html. Using code from this blog post I was able to extract the data from the JSON string supplied by the API from the sessionize.com website.
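As a quick sanity check that the import worked, the query behind Figure 1 is along these lines ([Data] is the field named in the figure; the table name is my own assumption carried over from the previous post):

SELECT Id, ISJSON([Data]) AS IsValidJson   -- ISJSON() returns 1 when the string is valid JSON, 0 when it is not
FROM dbo.SessionizeData;                   -- table name assumed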
Before querying the data I need to explain one concept which is crucial for extracting data from structured JSON. In the example in Figure 2 below, the path of the ‘title’ key-value pair is as follows:
sessions.0.title - this would have the key-value pair 'title: “DAX Gotchas”' (see Figure 2).
Figure 2 – JSON data showing the sessions node and the first speaker node.
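To make the path idea concrete, here is a one-line example, assuming the JSON has been loaded into an NVARCHAR(MAX) variable called @AllJson as in the later examples (the table it is read from is my own assumption):

DECLARE @AllJson NVARCHAR(MAX) = (SELECT TOP (1) [Data] FROM dbo.SessionizeData);  -- table name assumed

-- Walk the path: the sessions array, element 0, key 'title'
SELECT JSON_VALUE(@AllJson, '$.sessions[0].title') AS FirstSessionTitle;  -- returns 'DAX Gotchas' for this data set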
In the JSON object that was returned from the sessionize.com API there is a node for each session, numbered 0 through 29. Within each node there are a number of Key:Value pairs, e.g. 'id : “117469”'. The paths, nodes and arrays, e.g. []speakers and []categoryItems, are what the TSQL is going to extract values from. Enough with all that waffling about JSON objects, let's write some proper TSQL. In the next example we are going to use a function called OPENJSON(). This is only available in SQL Server 2016 or higher. Using OPENJSON() in this example we are going to provide two arguments: @AllJson, which contains the JSON object and must be of datatype NVARCHAR(), and the path. The way I think about the path is that it specifies the node or array that I want to return from @AllJson. The other function that we will use is JSON_VALUE(). This function also accepts two parameters: an expression, which is a variable or field name containing JSON data, and the path. The way I think about the path is that it specifies the node or array that I want to return from the JSON data (yes, I said that already, just wanted to see if you are paying attention ;->). That’s a lot of words, so let's look at some TSQL in Figure 3 below.
Figure 3 – The JSON data from the sessions node returned as a result set in SSMS
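Figure 3 itself is a screenshot, so here is a hedged sketch of the kind of query it shows, built only from the pieces described in the text (@AllJson, the '$.sessions' path and the SessionTitle column); the table the JSON is read from is my own assumption:

DECLARE @AllJson NVARCHAR(MAX) = (SELECT TOP (1) [Data] FROM dbo.SessionizeData);  -- table name assumed

SELECT  s.[Key],
        s.[Value],
        s.[Type],
        JSON_VALUE(s.[Value], '$.title') AS SessionTitle   -- pull the title out of each session node
FROM OPENJSON(@AllJson, '$.sessions') AS s;                 -- one row per session, with Key, Value and Type columns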
When we look at Figure 3 we will notice that the first row of the data is the same as the data shown in Figure 2. In essence the FROM OPENJSON(@AllJson, ‘$.sessions’) is returning a dataset which consists of three fields, namely Key, Value, and Type; the field Value contains the JSON object for each of the 30 session nodes. Next the JSON_VALUE() function takes that JSON and extracts the value for one key pair. This is done by specifying the Key of the 'Key:Value' pair, so in the case of title the path ‘$.title’ is supplied for the path parameter. Since there is only one 'Key:Value' pair where the Key = title, its value is returned from the JSON_VALUE() function, in the field ‘SessionTitle’. Looking at Figure 2, there is a corresponding Key:Value pair in the []speakers array. So sessions[0].id.value is “117469”, and the corresponding lookup value, speakers[7].sessions.value, is also “117469”. The two values and their locations in the JSON object are shown in Figure 4 below.
Figure 4 – Showing the lookup values for both sessions to speakers and vice versa.
So we know that we want to get access to the data in the []speakers array, as this contains the list of speakerIDs for each session. How is this done? Well, I found an answer in this blog post - https://visakhm.blogspot.com/2016/07/whats-new-in-sql-2016-native-json_13.html. Below in Figure 5 is the TSQL and result set.
Figure 5 – Updated query to return the speakerID from the []speakers array.
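Figure 5 is also a screenshot; a hedged sketch of the same idea, adding the CROSS APPLY over the []speakers array of each session node, might look like the following (the table name and the column aliases are my own assumptions):

DECLARE @AllJson NVARCHAR(MAX) = (SELECT TOP (1) [Data] FROM dbo.SessionizeData);  -- table name assumed

SELECT  JSON_VALUE(s.[Value], '$.id')    AS SessionID,
        JSON_VALUE(s.[Value], '$.title') AS SessionTitle,
        sp.[Value]                       AS SpeakerID       -- one row per speaker for each session
FROM OPENJSON(@AllJson, '$.sessions') AS s
CROSS APPLY (
    SELECT [Value] FROM OPENJSON(s.[Value], '$.speakers')   -- the []speakers array holds the speaker IDs
) AS sp;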
All we have done in the query shown in Figure 5 is add a CROSS APPLY with a simple select statement. Now the speakerID is returned. Note that if there is more than one speakerID, such as in the case of sessionID 117615 (which has two awesome speakers), the query returns two rows, one for each speakerID, which is just what we wanted. Next let's have a look at returning data from the speakers node. Below in Figure 6 is the TSQL to return some data from the []speakers array.
Figure 6 – TSQL query to return data from the []speakers array
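Figure 6 is again a screenshot, so here is a sketch along those lines. The '$.links' path, the s alias and the LIKE '%Twitter%' filter all come from the query discussed below; the fullName key used for the speaker's name is an assumption about the sessionize.com speaker node:

DECLARE @AllJson NVARCHAR(MAX) = (SELECT TOP (1) [Data] FROM dbo.SessionizeData);  -- table name assumed

SELECT  JSON_VALUE(s.[Value], '$.id')       AS SpeakerID,
        JSON_VALUE(s.[Value], '$.fullName') AS SpeakerName,  -- key name assumed
        l.[Value]                           AS TwitterLink
FROM OPENJSON(@AllJson, '$.speakers') AS s
CROSS APPLY (
    SELECT [Value] FROM OPENJSON(s.[Value], '$.links')
    WHERE [Value] LIKE '%Twitter%'                           -- the links node can hold several link types
) AS l;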
Looking at the query inside the CROSS APPLY:
SELECT Value FROM OPENJSON(s.Value, '$.links') WHERE Value LIKE '%Twitter%'
There are a couple of things that are worth looking at. First, it is possible to use a WHERE clause on the columns returned by the OPENJSON() function. The reason for using the WHERE clause is that the links node can contain more than one type of link. During development some of the speakers had a LinkedIn profile, which they then removed 🙁. So by now I am sure you are saying “show me the money”. After some work I created a query which extracts the session, speaker and room information and returns it as a single result set, as shown in Figure 7 below.
Figure 7 – Result set with Session, Speaker and room details
If you want to have a try yourself and play with the code then you will find:
The TSQL source code is in this Azure Data Studio Notebook here
The Python code is in this Azure Data Studio Notebook here
If you have not run the Python code to import the data, then I have created an Azure Data Studio notebook containing the code to create the database and other tasks. The notebook can be found here.
Last, but very much not least, why did I spend so much effort to get all the data out of the sessionize API? The end goal was to supply the data to SQL Server Report Builder (download from here: https://www.microsoft.com/en-us/download/details.aspx?id=53613). This standalone tool will allow you to build an SSRS report. Using this tool I created a report which, when run, outputs pages that look like the one shown in Figure 8 below.
Figure 8 - Data finally published on the SSRS report
OK, long story short. I downloaded and installed SQL Server 2016 CTP3 on a Windows 10 Pro x64 virtual machine which was set up for testing purposes. Yes, I know that was a silly idea, but what the heck, living dangerously is fun sometimes! The OS was a standard install and it is standalone, as in not connected to a domain or anything clever. When prompted to enter a user account, I used a personal Microsoft account I have. Next I installed SQL Server 2016 CTP3, nothing fancy, and chose the native SSRS install. (If you are just interested in the list of steps I followed, they are at the bottom of this post.)
Next I logged in as the user account that was used during setup; this is a Microsoft account. Now I wanted to play with SSRS, so I opened the default browser (Microsoft Edge), entered the URL “localhost/reportserver” and waited. Next I got a dialog box asking me to enter my user credentials.
So I entered my Microsoft account details, and eventually ended up with the error message below. Yes, I know I should have remembered: an admin account needs to grant permission in Report Manager to the account you are connecting with. Doh! The next step was to see if I could run Microsoft Edge (the default browser) as Administrator; no, that was not possible. As can be seen from the screenshot below, it was possible to run the command prompt as Administrator, but not Microsoft Edge, which did not make sense.
Again, Google to the rescue: I found this page - http://www.virtualizationhowto.com/2015/07/windows-10-edge-opened-builtin-administrator-account/. I followed the instructions, restarted Windows, and was able to run Microsoft Edge as Administrator, "Yippee!!!" I typed in the URL “localhost/reportserver”, then eventually got the following error message. During an earlier search I had found this page http://www.windowscentral.com/how-find-internet-explorer-windows-10-if-you-really-need-it. I had tried this before making the change described on this page http://www.ghacks.net/2014/11/12/how-to-enable-the-hidden-windows-10-administrator-account/. So I entered 'Internet' into the search box and selected to run Internet Explorer as Administrator. Success!!!!!! (see screenshot below). As the saying goes, “a long way for a short cut”, however it works! So I set about making the relevant changes in Report Manager, setting up the Microsoft user account I normally log in with as Content Manager etc. That all seemed to go as expected. Next I switched accounts back to the Microsoft account and thought I would just open Microsoft Edge, since the account had been set up and I would not need to run it as Administrator. This did not work; I got the error message below. So I typed 'Internet' into the search box and Internet Explorer was one of the applications I was able to choose to run; the result can be seen below. It worked as expected. Is there an easier way? Then please let me know; this was not the most fun learning journey, and I’m always open to learning.
So what steps did I follow?
1) http://www.ghacks.net/2014/11/12/how-to-enable-the-hidden-windows-10-administrator-account/ Used the instructions to enable the hidden Administrator account and set the password for it. Switched accounts from the Microsoft account to the Administrator account.
2) http://www.virtualizationhowto.com/2015/07/windows-10-edge-opened-builtin-administrator-account/ Used these instructions to make the relevant change to the security policy.
3) http://www.windowscentral.com/how-find-internet-explorer-windows-10-if-you-really-need-it Used these instructions to find Internet Explorer and open it using the "Run as Administrator" option.
4) Opened SSRS Report Manager (running as local Administrator) and set the relevant permissions for the Microsoft account.
5) Switched accounts from the Administrator account to the Microsoft account. Searched for Internet Explorer, then browsed to "localhost/reports" and was able to see SSRS Report Manager.
One of the attractions of being a programmer was that I could find ways to make routine tasks easier and quicker. Our company and clients have discovered the joy of Power BI and the reports I have created, and more clients are asking for reports. Some of these are the same reports, to ensure everyone has the same view of their data.
Scenario
There are two databases I populate with data; one of these is on my machine, the other is an Azure database. As you would expect, I use the local copy to test out various changes to the database. Then, using the Redgate tool SQL Compare, I can script any changes and apply them to the Azure database. It is possible to change the data source for each dataset easily using the following method. From the main window, on the 'Home' tab, click once on the 'Edit Queries' button (see Figure 1).
Figure 1 - showing the location of the 'Edit Queries' button
This will open the 'Edit Queries' window. Now to see the data source for a selected dataset: first select a dataset; in the example shown in Figure 2 it is 'DimDate'. Next click on the 'Advanced Editor' button (see Figure 2).
Figure 2 - Showing the location of the 'Advanced Editor' button
When the Advanced Editor window opens it will look something like the one shown in Figure 3 below.
Figure 3 - Advanced Editor window, with the M (Power Query Formula Language) query showing the Database Server Name and Database Name
The syntax of the statement is relatively simple. In this case we are looking at the Sql.Database data function; the syntax looks like this:
Source = Sql.Database("Name of Database Server", "Name Of Database")
This is the text shown in Figure 3:
let
    Source = Sql.Database("localhost", "DBdata"),
    dbo_DimDate = Source{[Schema="dbo",Item="DimDate"]}[Data]
in
    dbo_DimDate
To change the data source to point at the Azure database, the only changes that have to be made are as follows:
let
    Source = Sql.Database("AzureDatabase", "DBdata"),
    dbo_DimDate = Source{[Schema="dbo",Item="DimDate"]}[Data]
in
    dbo_DimDate
Job done. This was nice until I had to do it several times; personally I got bored and then wondered if there was a better way. I am glad you asked if there is a better way; let me show you. Having seen some examples where the data source was an Excel spreadsheet, I was aware that it was possible to supply parameters to change the data source from one spreadsheet to another. The first step is to set up the parameters. Open the Query Editor window, click on the 'New Sources' button and from the list click on 'Blank Query' (see Figure 4).
Having set up the DatabaseServer dataset (parameter value), repeat the process and create a DatabaseName dataset (parameter value). Figure 6 shows what your Query Editor window would look like if you entered the same details shown in this example.
* DatabaseServer - containing the value 'AzureDatabase' or the name of the database server you are going to connect to.
* DatabaseName - containing the value 'DBData' or the name of the database you are going to connect to.
If we look at the advanced properties of the dataset 'DimDate' the data source would look like the one shown below.
let
    Source = Sql.Database("AzureDatabase", "DBdata"),
    dbo_DimDate = Source{[Schema="dbo",Item="DimDate"]}[Data]
in
    dbo_DimDate
Next step is to replace the database server name and database name with the names of the Datasets that have just been set up. So the data source will look like the one below.
let
    Source = Sql.Database(DatabaseServer, DatabaseName),
    dbo_DimDate = Source{[Schema="dbo",Item="DimDate"]}[Data]
in
    dbo_DimDate
The Advanced Editor window should look like the one shown in Figure 7 below. When the data source is queried, the values of 'DatabaseServer' and 'DatabaseName' are replaced by the values from the relevant dataset. In this PowerBI desktop report I just replaced all the hard-coded instances of the database server name and database name with the relevant parameter. This allows me to switch between the two databases with the ease I wanted, though only in PowerBI desktop :-(
Figure 7 - Showing the final M (Power Query Formula Language) text (data source)
If you click on this link you can download a copy of a .PBIX with the two datasets shown above. There is more you can do with this, but you will need to wait until part two of this blog post for more details.
For various reasons which I have now forgotten, I set up transactional replication for some clients. The result of this is that I am the caretaker of transactional replication for two of our clients; what a lucky person I am!
T-SQL code used to check transactional replication
Once I got the process up and running (a very, very long and stressful story), I realised that I would have to monitor these processes. Following some googling, this article with some TSQL written by SQLSoldier was found here. This worked for me and I used it to monitor replication by running the script and checking the results manually.
Nagios output
Our SysAdmin uses a tool called Nagios to monitor our IT estate. So they suggested that a script could be written to monitor the replication process and send an alert if anything needed to be looked at. This sounded like an excellent idea, but how to do it? The approach that was arrived at involved using a PowerShell script which would run a SQL query, examine the results, then respond to Nagios with the following values
Next we decided to use PowerShell to return the results. Following some googling, we found this page http://www.madeiradata.com/cross-server-replication-health-check-using-powershell/ which runs a SQL script and returns a data set. The first challenge was that the TSQL script from SQLSoldier was rather long for my first PowerShell script; I wanted something smaller. So I created a stored procedure based on the script and placed it in my Distribution database on the replicated database server. Doing this had two reasons: first, less TSQL in the PowerShell script; second, changing one of the parameters means it returns different amounts of data. The stored procedure takes the following parameters;
The intention of this error message is primarily to give some guidance as to what might be happening with the replication process. It is not intended to give any guidance on the underlying reason that is causing the issue; all that is required is that the Nagios process shows that there is something wrong. Whatever the reason, it requires some form of human intervention. Once an error condition has been detected, the issue will be handed to me to resolve. At least now I do not have to check the process periodically; now I just have to wait for a message from our sysadmin.