The knowledge base blog (https://www.esdm.co.uk/knowledge)

Mapserver returning WFS attributes scrambled when using msplugin_mssql2008 plugin

We recently upgraded Mapserver to the current version for a project because we wanted the WFS id field to be populated in the GeoJSON output. This requires a recent version of OGR (>= 2.3), which is included in the Mapserver 7.2.1 build.

All went well, and we modified the Mapserver GeoJSON output format to specify the field we wanted to use (originally named id in our case) by adding the line: FORMATOPTION "LCO:ID_FIELD=id"

OUTPUTFORMAT
   NAME "geojson"
   DRIVER "OGR/GEOJSON"
   MIMETYPE "application/json; subtype=geojson"
   FORMATOPTION "STORAGE=stream"
   FORMATOPTION "FORM=SIMPLE"
   FORMATOPTION "LCO:COORDINATE_PRECISION=0"
   FORMATOPTION "LCO:ID_FIELD=id"
END

The field "id" needed to be included in the relevant map layer's gml_include_items, e.g.:

"gml_include_items"     "name,recsubtype,recsubtypecode,rectype,ref,uid,id"

After those changes all seemed to be going well, and the id was appearing in our WFS return:

[screenshot]

However, when we looked at some of the other attributes we noticed scrambled characters (these had been fine with the previous version of Mapserver), e.g. rectype in the screenshot below:

[screenshot]

Our data was coming from SQL Server using the msplugin_mssql2008 plugin, and when we set debug on the Mapserver layer and looked at the log we saw:

msConvertWideStringToUTF8(): General error message. Unable to convert string in encoding 'UCS-2LE' to UTF8 An invalid multibyte sequence has been encountered in the input

So it looked as if something was going wrong in the Unicode translation. The workaround was refreshingly simple: we just had to alter the view that the data was being pulled from to force the conversion, e.g. change:

'Site'

to:

CONVERT(nvarchar(4),N'Site')

then all was well again.
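For the curious, the scrambling itself is easy to reproduce outside Mapserver. The following is a rough Python illustration (not the actual Mapserver code): single-byte varchar data read as if it were UCS-2LE/UTF-16LE pairs adjacent bytes into entirely different characters.

```python
# Illustration only: 'Site' stored as single-byte varchar is 4 bytes.
raw = b"Site"  # 0x53 0x69 0x74 0x65

# Read as UCS-2LE (UTF-16LE), those 4 bytes pair up into 2 unrelated
# code points, 0x6953 and 0x6574 - the "scrambled" output we saw.
scrambled = raw.decode("utf-16-le")
print(scrambled)  # two CJK characters

# Data that is genuinely nvarchar (UTF-16LE) round-trips correctly.
proper = "Site".encode("utf-16-le")  # 8 bytes, one 16-bit unit per character
print(proper.decode("utf-16-le"))  # Site
```

Forcing the column through N'...' / nvarchar means the bytes really are UTF-16, so the plugin's declared encoding matches the data.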


https://www.esdm.co.uk/mapserver-returning-wfs-attributes-scrambled-when-using-msplugin_mssql2008-plugin () https://www.esdm.co.uk/mapserver-returning-wfs-attributes-scrambled-when-using-msplugin_mssql2008-plugin https://www.esdm.co.uk/mapserver-returning-wfs-attributes-scrambled-when-using-msplugin_mssql2008-plugin Thu, 21 Feb 2019 12:54:32 GMT
Using cloudscribe with Identity Server 4 with a SQL Server store on .Net Core 2 MVC securing APIs

Well, that's a bit of a mouthful of a title, but it does describe what I was trying to do, and I didn't find it easy. It's not that complicated when you know how, but there are an awful lot of options and it wasn't clear (to me at least) which ones to pick, so I hope this saves others some time.

Aim

I wanted a cloudscribe site running that allows users to log on as usual using cookies, but I also wanted an API that could be secured for clients external to the site without having to use the cookie login, and I wanted to use Identity Server 4 with a SQL Server data store.

Starting setup

Server

I used Joe Audette's excellent cloudscribe Visual Studio template https://www.cloudscribe.com/blog/2017/09/11/announcing-cloudscribe-project-templates and selected the MSSQL Server option and the Include Identity Server integration option. Selecting the 2 options in the "Expert Zone" also gave me an example API to test with.

[screenshot]

This gave me the basic website into which I wanted to add the secured API. The VS project includes a weather forecast API as an example.

Client

I then set up a separate MVC project using a basic template to act as the client application. This was all done using Visual Studio 2017 and .Net Core 2.

Server application

By default, the weather forecast API is accessible to all users. Try: http://localhost:35668/api/SampleData/WeatherForecasts

You can secure this by adding the [Authorize] attribute to the API action in SampleDataController.cs, e.g.

[Authorize]
[HttpGet("[action]")]
public IEnumerable<WeatherForecast> WeatherForecasts()
{
    var rng = new Random();

but you will find this presents the standard cloudscribe logon screen to access it, not exactly what's wanted for an API.

In order to solve this we need to use JWT authorisation alongside the standard cookie authentication, but tell the API to secure itself using only the JWT scheme. This is done by filtering the authentication scheme used by the [Authorize] attribute as below (you will probably have to add the following using statements to your code):

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Authentication.JwtBearer;

and then add the filter to the [Authorize] attribute:

[Authorize(AuthenticationSchemes = JwtBearerDefaults.AuthenticationScheme)]
[HttpGet("[action]")]
public IEnumerable<WeatherForecast> WeatherForecasts()
{
    var rng = new Random();

We have now told the API to authenticate using JwtBearer, but we haven't yet added JWT authentication to our application's pipeline. So in startup.cs we need to add a namespace:

using Microsoft.AspNetCore.Authentication.JwtBearer;

and then add the JWT service into the ConfigureServices method (I added the statement below just above services.AddCors(options =>)):

services.AddAuthentication(options =>
{
    options.DefaultScheme = JwtBearerDefaults.AuthenticationScheme;
})

.AddJwtBearer(options =>
{
    options.Authority = "http://localhost:35668"; //No trailing /
    options.Audience = "api2"; //Name of api
    options.RequireHttpsMetadata = false;

});

Where:

.Authority is the address of the website hosting the API (note: no trailing slash)

.Audience is the name you have given to the API in the Identity Server 4 security setup (see more details below)

And then we need to tell our pipeline to use authentication, so add app.UseAuthentication() near the end of the Configure method, just above the UseMvc call:

app.UseAuthentication();
UseMvc(app, multiTenantOptions.Mode == cloudscribe.Core.Models.MultiTenantMode.FolderName);

Now if you try to access the API - http://localhost:35668/api/SampleData/WeatherForecasts - you should get an unauthorised response, even if you are logged onto the cloudscribe site using cookie authentication.
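Any HTTP client that can attach a bearer token will be able to call the secured action once it has one. A minimal Python sketch of the calling side (helper names are mine, standard library only; without a token the server should answer 401):

```python
import urllib.request

def authorized_request(url: str, access_token: str) -> urllib.request.Request:
    """Build a request carrying a JWT bearer token in the Authorization header."""
    req = urllib.request.Request(url)
    req.add_header("Authorization", "Bearer " + access_token)
    return req

def call_weather_api(base_url: str, access_token: str) -> str:
    """Call the secured WeatherForecasts action; urlopen raises HTTPError 401
    if the token is missing or invalid."""
    req = authorized_request(base_url + "/api/SampleData/WeatherForecasts", access_token)
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```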

Identity Server 4 configuration (through cloudscribe)

Identity server has many options – which can be bewildering to start with. Full documentation is here: http://docs.identityserver.io/en/release/index.html

For our purposes here – I’m outlining the bare minimum that we need to setup security for our API, either using:

· A client credential using a secret

· A username and password

API resources

Under the admin menu in cloudscribe select Security Settings / API Resources and create a new API record, giving it a name (e.g. api2) and making sure it matches the name you entered as .Audience in startup.cs.

Then we need to add a single scope record – called “allowapi2” in this example.

[screenshot]

Client resources

Under the admin menu in cloudscribe select Security Settings / API Clients and create a new client (I've called it client2; remember this name for when we make the call from the client application). Edit the new client record and add:

· Allowed Scope record – e.g. allowapi2 – this must match the scope we entered for the api and is used to specify which apis this client can access

[screenshot]

· Client Secrets – the value is the SHA-256 hash of the secret we wish to use (in this example secret2). At the moment the cloudscribe interface doesn't do this conversion for us, so we have to do it manually somewhere (e.g. I used string s = "secret2".ToSha256();)

I added the secret using the web page and then pasted the converted secret directly into the relevant field in the record in the csids_ClientSecrets table in the database, but I think it would work equally well just pasting the converted value into the web page.

.ToSha256() is a string extension method in the IdentityModel assembly - this seems to do more than simply convert to sha256 - see https://github.com/IdentityModel/IdentityModel/blob/master/source/IdentityModel.Net45/Extensions/HashStringExtensions.cs.
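Reading that linked source, ToSha256() appears to return the Base64-encoded SHA-256 digest of the UTF-8 string (not a hex string). If you need to produce the same value outside .NET, a Python equivalent would look like this; this is my reading of the linked code, so verify it against the IdentityModel version you actually use:

```python
import base64
import hashlib

def to_sha256(value: str) -> str:
    """Base64-encoded SHA-256 digest of a UTF-8 string - intended to mirror
    IdentityModel's ToSha256() extension (an assumption; check your version)."""
    digest = hashlib.sha256(value.encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")

# A 32-byte digest always Base64-encodes to 44 characters ending in '='.
hashed = to_sha256("secret2")
print(len(hashed))  # 44
```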

It's important that we set the secret type as well; in our example here it must be "SharedSecret".

Joe Audette has updated his nuget packages so saving a client secret now gives you a range of options for the secret type - in our example we need to pick "SharedSecret" and select to encrypt using Sha256 (see Joe's post in comments below for other options) which should make things easier.

[screenshot]

· Allowed Grant types – we are entering “password” and “client_credentials”. These determine how we can authenticate from the client app as we see below in the next section. Password means that authentication can use a username / pwd combination (i.e. a cloudscribe login). Client_credentials means we can login using a client secret and don’t have to be a known user on the site.

[screenshot]

Client application

To connect securely to the API using a client connection with a secret use:

var tokenClient = new TokenClient(disco.TokenEndpoint, "client2", "secret2");

To connect using a username and password use:

var tokenResponsePassword = await tokenClient.RequestResourceOwnerPasswordAsync("admin", "admin", "allowapi2");

Note that the user name here is the actual user name, not the email address (which can also be used to log in interactively).

The whole method in the controller looked something like this; the rest of the code deserialises the JSON returned from the API and puts it into an object that can be displayed on a view page:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using ESDM.Models;
using System.Net.Http;
using Newtonsoft.Json;
using IdentityModel.Client;
using IdentityServerClient.Models;
using IdentityModel;

and then

//Hosted web API REST Service base url  
string Baseurl = "http://localhost:35668";

public async Task<ActionResult> Index()
{
    List<WeatherForecast> ans = new List<WeatherForecast>();
    using (var client = new HttpClient())
    {
        // discover endpoints from metadata
        var disco = await DiscoveryClient.GetAsync(Baseurl);
        var tokenClient = new TokenClient(disco.TokenEndpoint, "client2", "secret2");
        var tokenResponse = await tokenClient.RequestClientCredentialsAsync("allowapi2");

        // Example getting an alternative token if you want to use username / pwd
        var tokenResponsePassword = await tokenClient.RequestResourceOwnerPasswordAsync("admin", "admin", "allowapi2");

        // call api - swap in tokenResponsePassword if you want to use username / pwd
        client.SetBearerToken(tokenResponse.AccessToken);

        var response = await client.GetAsync(Baseurl + "/api/SampleData/WeatherForecasts");
        if (response.IsSuccessStatusCode)
        {
            var content = await response.Content.ReadAsStringAsync();
            ans = JsonConvert.DeserializeObject<List<WeatherForecast>>(content);
        }
        return View(ans);
    }
}

The model for the forecast data was:

namespace ESDM.Models
{
    public partial class WeatherForecast
    {
        public string DateFormatted { get; set; }
        public int TemperatureC { get; set; }
        public string Summary { get; set; }

        public int TemperatureF
        {
            get
            {
                return 32 + (int)(TemperatureC / 0.5556);
            }
        }
    }
}

And my view contained

@model IEnumerable<ESDM.Models.WeatherForecast>
<div>
    <ul>
        @foreach (var forecast in Model)
        {
            <li>@forecast.Summary</li>
        }
    </ul>
</div>

https://www.esdm.co.uk/using-cloudscribe-with-identity-server-4-with-a-sql-server-store-on-net-core-2-mvc-securing-apis () https://www.esdm.co.uk/using-cloudscribe-with-identity-server-4-with-a-sql-server-store-on-net-core-2-mvc-securing-apis https://www.esdm.co.uk/using-cloudscribe-with-identity-server-4-with-a-sql-server-store-on-net-core-2-mvc-securing-apis Fri, 03 Nov 2017 22:14:00 GMT
How to link SQL Server / RDBMS tables into MS Access without a DSN

Linking tables from an RDBMS like SQL Server into an Access front end doesn't need an ODBC DSN to be configured; it can be done with a one-liner. I'm just posting this here because I keep having to look it up and can never find it.
This example is for SQL Server; change the driver for other databases. Run this from the immediate window or a macro (adjust server, table names and credentials to suit):

DoCmd.TransferDatabase acLink, "ODBC", "ODBC;DRIVER=SQL Server;SERVER=myServer\myInstanceIfNeeded;DATABASE=myDatabaseName;Trusted_Connection=yes;", acTable, "myTableName", "myTableName", False, True

The final parameter ensures that login credentials are stored.
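The middle argument is an ordinary DSN-less ODBC connection string, and the same pattern works from any ODBC client, not just Access. A small Python sketch (the helper name and the pyodbc usage in the comment are illustrative, not from the original post):

```python
def dsnless_connection_string(server: str, database: str, trusted: bool = True) -> str:
    """Build a DSN-less ODBC connection string for SQL Server."""
    parts = ["DRIVER={SQL Server}", f"SERVER={server}", f"DATABASE={database}"]
    if trusted:
        parts.append("Trusted_Connection=yes")
    return ";".join(parts) + ";"

# e.g. with pyodbc (untested sketch):
#   import pyodbc
#   conn = pyodbc.connect(dsnless_connection_string(r"myServer\myInstance", "myDatabaseName"))
print(dsnless_connection_string(r"myServer\myInstanceIfNeeded", "myDatabaseName"))
```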

Of course our production applications like HBSMR and PACS manage database connections in a more sophisticated manner, with the ability to switch databases and define which tables/views are attached in one operation, but the above technique can be useful where MS Access is being used for quick querying/editing/reporting operations.


Crispin Flower
https://www.esdm.co.uk/how-to-link-sql-server--rdbms-tables-into-ms-access-without-a-dsn crispin.flower@idoxgroup.com (Crispin Flower) https://www.esdm.co.uk/how-to-link-sql-server--rdbms-tables-into-ms-access-without-a-dsn https://www.esdm.co.uk/how-to-link-sql-server--rdbms-tables-into-ms-access-without-a-dsn Thu, 30 Mar 2017 18:24:00 GMT
GeoServer WFS fails on SQL Server tables/views with GUID data type

Just a quick note that might be useful to some.

The problem...

If you publish a SQL Server (in this case 2014) table or view with a GUID field using GeoServer (version 2.10), it will fail to produce a valid WFS, because the &request=DescribeFeatureType response will not include these layers.

This manifests itself in various ways depending on the client, but in QGIS on trying to load the layer into the map we get this in the "Log Messages Panel":

Analysis of DescribeFeatureType response failed for url  srsname='EPSG:27700' typename=xxx:yyyurl='https://mysite.com/myworkspace/wfs' version='1.0.0' table="" sql=: it is probably a schema for Complex Features

which had me puzzled for quite some time!

The solution...

Changing the data source view to include this field with

CAST(myGUID AS varchar(36)) AS myGUID

then hitting "Reload feature type" in the GeoServer layer configuration screen fixes it. This changes the field data type from "UUID" to "string" in that screen. Alternatively, just omit the field entirely.
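Incidentally, the 36 in varchar(36) is just the length of a GUID's canonical text form: 32 hex digits plus 4 hyphens. A quick Python check:

```python
import uuid

guid = uuid.uuid4()
text = str(guid)  # canonical form, e.g. '123e4567-e89b-12d3-a456-426614174000'
print(len(text))  # 36: 32 hex digits + 4 hyphens
```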

My guess is this will not help with transactional WFS, but I have not tested it. Overall, GUIDs are still best avoided in GIS-land as they are so poorly supported by the database drivers.


Crispin Flower
https://www.esdm.co.uk/geoserver-wfs-fails-on-sql-server-tablesviews-with-guid-data-type crispin.flower@idoxgroup.com (Crispin Flower) https://www.esdm.co.uk/geoserver-wfs-fails-on-sql-server-tablesviews-with-guid-data-type https://www.esdm.co.uk/geoserver-wfs-fails-on-sql-server-tablesviews-with-guid-data-type Wed, 11 Jan 2017 22:23:00 GMT
Troubleshooting issues with VSS from BackupChain and SQL Server

Symptoms:

BackupChain fails almost as soon as it starts when backing up a SQL Server VM

The SQL Server VM's Application log is riddled with errors

Diagnostics:

Use VSSDiag

You can start by using VssDiag (http://backupchain.com/VssDiag.html), entering the exact time when the error was reported by BackupChain (within 5 minutes) on the SQL Server VM.

This will list any VSS issues from the log.

I saw an error like:

A VSS writer has rejected an event with error 0x800423f4, The writer experienced a non-transient error.  If the backup process is retried,
the error is likely to reoccur.
. Changes that the writer made to the writer components while handling the event will not be available to the requester.
Check the event log for related events from the application hosting the VSS writer.


Operation:
   PrepareForSnapshot Event

Context:
   Execution Context: Writer
   Writer Class Id: {a65faa63-5ea8-4ebc-9dbd-a0c4db26912a}
   Writer Name: SqlServerWriter
   Writer Instance Name: SQL Server 2008 R2:SQLWriter
   Writer Instance ID: {d78d914d-4a11-4350-8334-af08bff3ce07}
   Command Line: "C:\Program Files\Microsoft SQL Server\90\Shared\sqlwriter.exe"
   Process ID: 2408

Look in VM Application Event Log

I saw several entries for databases:

The log for database 'CMSi-TNC-demo' is not available. Check the event log for related error messages. Resolve any errors and restart the database.

If in Enterprise Manager you run DBCC CHECKDB for the database mentioned, you may see a host of errors.

Fix:

I found the fix was to take the database in question offline and then put it back online again.

Running DBCC CHECKDB again then reported no errors.

After I had worked through all the databases that had this error, BackupChain could then run a VM backup.


https://www.esdm.co.uk/troubleshooting-issues-with-vss-from-backup-chain-and-sql-server () https://www.esdm.co.uk/troubleshooting-issues-with-vss-from-backup-chain-and-sql-server https://www.esdm.co.uk/troubleshooting-issues-with-vss-from-backup-chain-and-sql-server Thu, 12 Mar 2015 09:12:25 GMT
Confusion when renaming SQL Server views

If you rename a view through the interface in SQL Server, it can lead to strange behaviour later on if you ever come to recompile the view. Symptoms include views failing to recompile and, in extreme cases, the entire structure of a view (not necessarily the one you're recompiling) changing when the built-in sp_refreshview procedure is run.

Behind the scenes, SQL Server stores information about the views in its system tables; the key one for our purposes here is INFORMATION_SCHEMA.VIEWS, which includes, among other things, the view's name and its SQL definition, something like the example below:

Name: vieContacts
Definition: Create View vieContacts as Select ContactID, Name, PhoneNumber from ContactsTable

Now, suppose I rename vieContacts in the interface to OldVieContacts. This will update the name in the system table, but does not update the definition, so I end up with the following:

Name: OldVieContacts
Definition: Create View vieContacts as Select ContactID, Name, PhoneNumber from ContactsTable

When I come to recompile the view, it will fail with the error "Msg 208, Level 16, State 6, Procedure sp_refreshsqlmodule_internal, Line 75: Invalid object name 'vieContacts'." This is because the name doesn't match what is stored in the definition.

Unfortunately, the problem doesn't stop there. If I use the copy/paste function in SQL Management Studio, I can copy a view, rename it so that it's a backup and then edit my original. This sounds like a good idea, but because of the problems that renaming causes, it can lead to major headaches later on. To illustrate why this can be so confusing, let's imagine I copy my vieContacts, rename the copy to OldVieContacts and modify my vieContacts to include another couple of fields. After this, I would have the following in INFORMATION_SCHEMA.VIEWS:

Name: OldVieContacts
Definition: Create View vieContacts as Select ContactID, Name, PhoneNumber from ContactsTable

Name: vieContacts
Definition: Create View vieContacts as Select ContactID, Name, PhoneNumber, Address, Email from ContactsTable

What makes this such a potential problem is that if I recompile OldVieContacts (and this might be done as part of a script that refreshes all the views) the inbuilt function reads the definition out of Information_Schema.Views, which tells it to rebuild vieContacts, so the result is that vieContacts gets overwritten and I would see my two new columns disappear from it. Worse, the refresh process also updates the definition in Information_Schema.Views, so now vieContacts would be stuck with the incorrect structure and no easy way to fix it.

So what can you do about these problems? Well, here are some suggestions:

  1. Prevention is better than cure. Don't rename views through the SQL Server Management Studio interface.
  2. If you need to change the name of a view, drop the view and recreate it with the new name - you can use the "Script View As" option to create a SQL script that will do the drop and create for you, and just change the name in the creation script before running it.
  3. If you're lucky, you may be able to rename the view in the interface back to its old name. This isn't recommended, though, as it's less reliable than the other methods.
  4. Rather than copying a view, just copy the SQL definition from the designer into a new blank view. This may seem more complex than copying and pasting the view, but it will be better in the long term.
  5. If you suspect that some of your views may suffer from this issue, you can check how severe the problem is by running this SQL command from a query window: Select TABLE_NAME, VIEW_DEFINITION from INFORMATION_SCHEMA.VIEWS where VIEW_DEFINITION not like '%' + TABLE_NAME + '%' - this will identify the views that need attention.
  6. You can repair a view that has suffered in this way by running an Alter View script to alter the view to the correct definition.

The best thing, though, is to avoid renaming views through the SQL Server Management Studio interface.


Timothy Derbyshire
https://www.esdm.co.uk/confusion-when-renaming-sql-server-views timothyd@esdm.co.uk (Timothy Derbyshire) https://www.esdm.co.uk/confusion-when-renaming-sql-server-views https://www.esdm.co.uk/confusion-when-renaming-sql-server-views Fri, 05 Dec 2014 14:47:00 GMT
WMS/WFS down when a GeoServer store is disabled after Windows updates

Last night some of our servers restarted after scheduled updates, and we awoke to warning emails that some of our GIS web services were down. The affected WMS/WFS services were from GeoServer, so we knew immediately what to look for, as this has happened before: the "Store" representing the connection to the GIS data repository (in this case a PostGIS database) had switched itself off, or become disabled:

GeoServer store disabled after server restart

Ticking the box and saving the Store fixed the problem.

Our hypothesis is that if the GeoServer service is restarted when the database server is not yet available, GeoServer automatically disables the affected Stores. [I later found this discussion in which one of the GeoServer developers confirms that "the startup code checks if the stores are available, and if not, it disables them".]

We will try to prevent this happening again by ensuring that the GIS server is restarted well after the database servers, but we’d welcome any other ideas.


Crispin Flower
https://www.esdm.co.uk/wmswfs-down-when-a-geoserver-store-is-disabled-after-windows-updates crispin.flower@idoxgroup.com (Crispin Flower) https://www.esdm.co.uk/wmswfs-down-when-a-geoserver-store-is-disabled-after-windows-updates https://www.esdm.co.uk/wmswfs-down-when-a-geoserver-store-is-disabled-after-windows-updates Thu, 13 Nov 2014 09:31:00 GMT
SQL Server - Listing all tables in database, the space used and row counts

This is a very useful SQL script for listing all of the tables in a SQL Server database with the row count and the amount of space used by each table; handy for troubleshooting databases that are using more space than expected.

CREATE TABLE #RowCountsAndSizes (TableName NVARCHAR(128),rows CHAR(11),      
       reserved VARCHAR(18),data VARCHAR(18),index_size VARCHAR(18), 
       unused VARCHAR(18))

EXEC       sp_MSForEachTable 'INSERT INTO #RowCountsAndSizes EXEC sp_spaceused ''?'' '

SELECT     TableName,CONVERT(bigint,rows) AS NumberOfRows,
           CONVERT(bigint,left(reserved,len(reserved)-3)) AS SizeinKB
FROM       #RowCountsAndSizes 
ORDER BY   NumberOfRows DESC,SizeinKB DESC,TableName

DROP TABLE #RowCountsAndSizes

The original details including some other useful scripts can be found using the following link:

 http://www.sqlmatters.com/Articles/Listing%20all%20tables%20in%20a%20database%20and%20their%20row%20counts%20and%20sizes.aspx


Steve Ellwood
https://www.esdm.co.uk/sql-server-listing-all-tables-in-database-the-space-used-and-row-counts stevee@esdm.co.uk (Steve Ellwood) https://www.esdm.co.uk/sql-server-listing-all-tables-in-database-the-space-used-and-row-counts https://www.esdm.co.uk/sql-server-listing-all-tables-in-database-the-space-used-and-row-counts Mon, 22 Sep 2014 15:37:00 GMT
Remote connection to SQL Server on dynamic ports

SQL Server has conventionally used port 1433. However, Microsoft recently made some changes, and when using named instances you are likely to find your SQL Server is using dynamic ports.

This means that SQL Server chooses the port to use, which can cause issues if, for example, you are trying to open the correct port number in a firewall. Note that although it is called "dynamic", once allocated (after a SQL Server restart) it is likely to remain fixed: on any subsequent restart SQL Server will try to use the same port, and will only change port number if that port is no longer available.

If running over a LAN, a client can work out which port to connect to the SQL Server on, so long as sqlbrowser.exe is allowed through the firewall on the SQL Server machine. This publishes the existence of the SQL Server instance and which port to connect on. This TechNet article explains that to connect, the client first sends a UDP packet to port 1434 to resolve the dynamic port, then connects as normal.
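That port-resolution exchange can be sketched in a few lines of Python. This is my own rough reading of Microsoft's SQL Server Resolution Protocol (MC-SQLR), so treat the request/response layout as an assumption; the Browser's reply is essentially a semicolon-delimited string containing a tcp;<port> pair:

```python
import socket

def parse_ssrp_response(data: bytes) -> str:
    """Pull the TCP port out of a SQL Browser reply.
    The reply is a small header followed by a ;-delimited string
    containing (among other fields) ...;tcp;<port>;..."""
    fields = data[3:].decode("ascii", errors="replace").split(";")
    return fields[fields.index("tcp") + 1]

def resolve_sql_port(host: str, instance: str, timeout: float = 3.0) -> str:
    """Ask the SQL Browser service (UDP 1434) which TCP port an instance uses.
    Sketch only: request byte 0x04 + instance name, per my reading of MC-SQLR."""
    request = b"\x04" + instance.encode("ascii")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request, (host, 1434))
        data, _ = sock.recvfrom(4096)
    return parse_ssrp_response(data)
```

e.g. resolve_sql_port("mysqlserver", "SQLEXPRESS") would return the dynamic port as a string, assuming the Browser service is reachable.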

There is a good description of what needs opening in a SQL Server machine's firewall (https://cs.thomsonreuters.com/ua/toolbox/cs_us_en/kb/k73037522.htm). Basically, open port 1433 and allow exceptions for sqlservr.exe and sqlbrowser.exe.

Having done this you should be able to connect from any client, so long as outbound ports are not blocked (unusual on a LAN).

However, if you are connecting from the internet you may need to open the specific port that SQL Server is running on. This is a little trickier: unless sqlbrowser.exe is allowed to communicate through your external firewall, the remote client will not know which port to attempt communication on.

On SQL Server

Start SQL Server Configuration Manager and open the correct instance name and 32/64-bit option (double-check you are using the right section!)

Make sure TCP/IP is enabled and then look at the properties.

[screenshot]

You will see various sections for each LAN card connected (and various virtual ones). Normally you would expect all these to be set as Enabled=No and Dynamic port = 0

The setting we are really interested in is at the bottom, in the IPAll section, where it gives us the dynamic port number (52056 in this example). If you have allowed sqlservr.exe and sqlbrowser.exe through the server's firewall, there should be no configuration needed on the server itself, but any external firewall will need to allow traffic on this port and direct it to the SQL Server's IP address.

Once this is allowed, the external machine can connect by specifying the port on the connection with a comma, e.g.

mysqlserver.mydomain.com\SQLEXPRESS,52056

This will work in Enterprise Manager or in your connection string to SQL Server.

From the client you can run the following from the command line:

  • NetStat -o

This will list all ports currently in use, so if you can connect to the SQL Server machine from another machine on the LAN (or with a VPN up) this can also help identify the port in use.

Note: please add / edit the above if incorrect / incomplete, as it's based on my best understanding to date!

Useful links

http://dba.stackexchange.com/questions/47651/when-is-a-dynamic-port-dynamic

http://support.microsoft.com/kb/823938

http://blogs.msdn.com/b/bgroth/archive/2004/11/11/256190.aspx

 


https://www.esdm.co.uk/remote-connection-to-sql-server-on-dynamic-ports () https://www.esdm.co.uk/remote-connection-to-sql-server-on-dynamic-ports https://www.esdm.co.uk/remote-connection-to-sql-server-on-dynamic-ports Sun, 02 Mar 2014 14:22:00 GMT
Using R with spatial data  

R is a cross-platform statistical package which is becoming extremely widely used. It is modular, so there are all sorts of add-ins available, including a number of sophisticated tools for spatial analysis, some of which run considerably faster than Arc / MapInfo. Organisations are also starting to move to R for spatial analysis, as it's cheaper but, more relevantly, doesn't suffer from the endless version changes that ArcGIS in particular forces on people.

R is generally used for a wide range of statistics, but also has the option to generate publication-quality graphs (PDFs or images).

Getting started

R can be downloaded here: http://www.r-project.org/

It's also suggested (on Windows) to use RStudio - http://www.rstudio.com/ - which gives a friendlier IDE.

R is a scripted tool, so there can be a steep learning curve, but there are endless examples on the web to help get started.

Spatial example using OGR

This syntax is a little obscure, but potentially of considerable use for a wide range of spatial data. Reading from shapefiles (or any other OGR-supported datasource) is straightforward.

Read data from SQL server

#download and load rgdal libraries
install.packages("rgdal")
library(rgdal)

#Use ogrinfo to get information about a shape file called stations
ogrInfo("C:/ToFile/RNLI/data", "Stations")

#List the spatial layers available in a SQL server database
#Note you must have the standard OGR tables geometry_columns and spatial_ref_sys present and populated in your database
#Some people say you need a DSN as well – not sure you do …
ogrListLayers("MSSQL:server=SQL2008;database=CMSI-NE;trusted_connection=yes")

#Read a table of polygon objects into a new sitesPoly object
#Then plot them on a map
#N.B.  You can’t have binary data in your table – so if you do create a view or somesuch
sitesPoly <- readOGR(dsn='MSSQL:server=SQL2008;database=CMSI-NE;trusted_connection=true', layer='SitesGeom')
plot(sitesPoly)

[screenshot]

#Select just one record from the layer
oneSite = subset(sitesPoly, sitesPoly$MI_PRINX == 420)

#Make sure the layer is assigned the correct projection
proj4string(oneSite) = CRS("+init=epsg:27700")

#Transform the layer to another projection
oneSite_latlong = spTransform(oneSite, CRS("+init=epsg:4326"))

#Load another library which contains some world boundaries
#Load up coast and countries outlines
library(rworldmap)
data(coastsCoarse)
data(countriesLow)

 

#plot countries outline limiting extent to Europe
plot(countriesLow, xlim = c(-10, 10), ylim = c(45, 60), legend = F)

#Add sites onto the existing map, setting colours
#(assign the source projection, transform to lat/long, then plot)
proj4string(sitesPoly) = CRS("+init=epsg:27700")
sitesPoly_ll = spTransform(sitesPoly, CRS("+init=epsg:4326"))
plot(sitesPoly_ll, col = "red", border = "red", add = T)

 

Use a different library to do some plotting

install.packages("maps")
install.packages("mapdata")
library(maps)
library(mapdata)

#Select polygons making up UK and plot
#Then plot a point on top
map('worldHires',
    c('UK', 'Ireland', 'Isle of Man','Isle of Wight','Wales:Anglesey'),
    xlim=c(-11,3), ylim=c(49,60.9))
points(-1.615672,54.977768,col=2,pch=18)

[screenshot]


https://www.esdm.co.uk/using-r-with-spatial-data () https://www.esdm.co.uk/using-r-with-spatial-data https://www.esdm.co.uk/using-r-with-spatial-data Sat, 28 Dec 2013 12:53:39 GMT
How to set up a MapInfo DBMS table from SQL Server without a DSN

A quick note to say that it is definitely possible to use a SQL Server spatial table in MapInfo Professional without an ODBC DSN. There's not much about this on the web, and the normal methods of opening SQL Server tables all seem to want a DSN. Here's a resulting .TAB file (this assumes you have the table or view configured in the MAPINFO.MAPINFO_MAPCATALOG table):

 

!table
!version 1000
!charset WindowsLatin1

Definition Table
  Type ODBC
begin_metadata
"\IsReadOnly" = "FALSE"
"\MapInfo" = ""
"\MapInfo\TableID" = "2e88f9e5-7d4c-4645-b824-4ef6b5102c9e"
"\DATALINK" = ""
"\DATALINK\Query" = "Select *  From ""myDatabase"".""dbo"".""myTableOrView"""
"\DATALINK\ConnectionString" = "Description=aDescription;DRIVER=SQL Server;SERVER=myServer;Trusted_Connection=Yes;APP=MapInfo Professional®;WSID=blah;DATABASE=myDatabase"
"\DATALINK\ToolKit" = "ODBC"
"\CACHE" = "OFF"
"\MBRSEARCH" = "ON"
end_metadata


Crispin Flower  ...]]>
https://www.esdm.co.uk/how-to-set-up-a-mapinfo-dbms-table-from-sql-server-without-a-dsn crispin.flower@idoxgroup.com (Crispin Flower) https://www.esdm.co.uk/how-to-set-up-a-mapinfo-dbms-table-from-sql-server-without-a-dsn https://www.esdm.co.uk/how-to-set-up-a-mapinfo-dbms-table-from-sql-server-without-a-dsn Thu, 14 Nov 2013 18:41:00 GMT
Optimising SQL Server spatial indexing - let SQL do the work.. Note – this will take a LONG time to run – quite possibly many hours on a big dataset – as it drops and recreates the index with every combination of parameters. It is a sledgehammer approach, but given the number of permutations available in SQL Server spatial indexing, and the level of confusion around optimising it, the empirical approach wins.

lifted from here: http://stackoverflow.com/questions/2920948/selecting-a-good-sql-server-2008-spatial-index-with-large-polygons

Basically, create the SPROC listed at the bottom, then set up some example calls (3 or 4 different ones to minimise caching artefacts)

Then set it running and go on holiday …

Set up your parameters:

DECLARE @g1 VARCHAR(MAX)
SET @g1 = 'GEOMETRY::STGeomFromText(''POLYGON((252783.989267352 461095.507712082,260134.010732648 461095.507712082,260134.010732648 465068.492287918,252783.989267352 465068.492287918,252783.989267352 461095.507712082))'', 28992)'
DECLARE @g2 VARCHAR(MAX)
SET @g2 = 'GEOMETRY::STGeomFromText(''POLYGON((5580.146375 5340.100667,655613.186375 5340.100667,655613.186375 1219811.501845,5580.146375 1219811.501845,5580.146375 5340.100667))'',28992)'
exec sp_tune_spatial_index 'OBSVEG.OBS_Locations' ,'sidx_OBS_Locations_geom',2,8,@g1,@g2

Create the stored procedure:

N.B. Assumes the geometry field is called geom
N.B. You MUST modify the SPROC's BOUNDING_BOX to match the extents of your SRID

USE [CMSI-OBSVEG]
GO
/****** Object:  StoredProcedure [dbo].[sp_tune_spatial_index]    Script Date: 10/01/2013 11:35:27 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

CREATE PROCEDURE [dbo].[sp_tune_spatial_index]
(
  @tabnm                VARCHAR(MAX),    -- This parameter stores the name of the spatial table for which you are tuning the index
    @idxnm                VARCHAR(MAX),    -- This parameter stores the name of the spatial index of the named table
    @min_cells_per_obj    INT,            -- Minimum Cells Per Object to test on. Suggested to start at 2.
    @max_cells_per_obj    INT,            -- Maximum Cells Per Object to test on.
    
    /*    The test requires two geometry instances to use in test query 1 and 2.
        The first one should cover the area of default extent. The second should
        cover an area roughly the size of the area shown when zoomed in, panning
        around. It is required that the variable store a string that will create
        the geometry instance since this will be done within the procedure and 
        cannot be a variable of type: GEOMETRY. The SRID of these instances must
        match that of the table you are testing. */
    @testgeom1            VARCHAR(MAX),    -- This parameter stores the first geometry instance creation string that will be used in the test
    @testgeom2            VARCHAR(MAX)    -- This parameter stores the second geometry instance creation string that will be used in the test
    
)

AS

SET NOCOUNT ON;

/*    Prior to running this procedure, two tables are required. These tables are 
    created here to prepare for running the procedure.    */

PRINT 'Checking for required tables...'
IF EXISTS(SELECT 1 FROM sysobjects WHERE name IN ('cell_opt_perm', 'spat_idx_test_result'))
    BEGIN
        PRINT '... The "cell_opt_perm" and "spat_idx_test_result" tables exist.'
    END
ELSE
BEGIN
    PRINT '... Creating "cell_opt_perm" and "spat_idx_test_result" tables.'
    CREATE TABLE cell_opt_perm(
        [perm_id] [smallint] NOT NULL,
        [permutation] [nvarchar](4) NOT NULL,
        [level1] [nvarchar](6) NOT NULL,
        [level2] [nvarchar](6) NOT NULL,
        [level3] [nvarchar](6) NOT NULL,
        [level4] [nvarchar](6) NOT NULL
    )

    INSERT INTO cell_opt_perm ([perm_id], [permutation], [level1], [level2], [level3], [level4])
    VALUES (1,'LLLL','LOW','LOW','LOW','LOW'),
        (2,'LLLM','LOW','LOW','LOW','MEDIUM'),
        (3,'LLLH','LOW','LOW','LOW','HIGH'),
        (4,'LLML','LOW','LOW','MEDIUM','LOW'),
        (5,'LLMM','LOW','LOW','MEDIUM','MEDIUM'),
        (6,'LLMH','LOW','LOW','MEDIUM','HIGH'),
        (7,'LLHL','LOW','LOW','HIGH','LOW'),
        (8,'LLHM','LOW','LOW','HIGH','MEDIUM'),
        (9,'LLHH','LOW','LOW','HIGH','HIGH'),
        (10,'LMLL','LOW','MEDIUM','LOW','LOW'),
        (11,'LMLM','LOW','MEDIUM','LOW','MEDIUM'),
        (12,'LMLH','LOW','MEDIUM','LOW','HIGH'),
        (13,'LMML','LOW','MEDIUM','MEDIUM','LOW'),
        (14,'LMMM','LOW','MEDIUM','MEDIUM','MEDIUM'),
        (15,'LMMH','LOW','MEDIUM','MEDIUM','HIGH'),
        (16,'LMHL','LOW','MEDIUM','HIGH','LOW'),
        (17,'LMHM','LOW','MEDIUM','HIGH','MEDIUM'),
        (18,'LMHH','LOW','MEDIUM','HIGH','HIGH'),
        (19,'LHLL','LOW','HIGH','LOW','LOW'),
        (20,'LHLM','LOW','HIGH','LOW','MEDIUM'),
        (21,'LHLH','LOW','HIGH','LOW','HIGH'),
        (22,'LHML','LOW','HIGH','MEDIUM','LOW'),
        (23,'LHMM','LOW','HIGH','MEDIUM','MEDIUM'),
        (24,'LHMH','LOW','HIGH','MEDIUM','HIGH'),
        (25,'LHHL','LOW','HIGH','HIGH','LOW'),
        (26,'LHHM','LOW','HIGH','HIGH','MEDIUM'),
        (27,'LHHH','LOW','HIGH','HIGH','HIGH'),
        (28,'MLLL','MEDIUM','LOW','LOW','LOW'),
        (29,'MLLM','MEDIUM','LOW','LOW','MEDIUM'),
        (30,'MLLH','MEDIUM','LOW','LOW','HIGH'),
        (31,'MLML','MEDIUM','LOW','MEDIUM','LOW'),
        (32,'MLMM','MEDIUM','LOW','MEDIUM','MEDIUM'),
        (33,'MLMH','MEDIUM','LOW','MEDIUM','HIGH'),
        (34,'MLHL','MEDIUM','LOW','HIGH','LOW'),
        (35,'MLHM','MEDIUM','LOW','HIGH','MEDIUM'),
        (36,'MLHH','MEDIUM','LOW','HIGH','HIGH'),
        (37,'MMLL','MEDIUM','MEDIUM','LOW','LOW'),
        (38,'MMLM','MEDIUM','MEDIUM','LOW','MEDIUM'),
        (39,'MMLH','MEDIUM','MEDIUM','LOW','HIGH'),
        (40,'MMML','MEDIUM','MEDIUM','MEDIUM','LOW'),
        (41,'MMMM','MEDIUM','MEDIUM','MEDIUM','MEDIUM'),
        (42,'MMMH','MEDIUM','MEDIUM','MEDIUM','HIGH'),
        (43,'MMHL','MEDIUM','MEDIUM','HIGH','LOW'),
        (44,'MMHM','MEDIUM','MEDIUM','HIGH','MEDIUM'),
        (45,'MMHH','MEDIUM','MEDIUM','HIGH','HIGH'),
        (46,'MHLL','MEDIUM','HIGH','LOW','LOW'),
        (47,'MHLM','MEDIUM','HIGH','LOW','MEDIUM'),
        (48,'MHLH','MEDIUM','HIGH','LOW','HIGH'),
        (49,'MHML','MEDIUM','HIGH','MEDIUM','LOW'),
        (50,'MHMM','MEDIUM','HIGH','MEDIUM','MEDIUM'),
        (51,'MHMH','MEDIUM','HIGH','MEDIUM','HIGH'),
        (52,'MHHL','MEDIUM','HIGH','HIGH','LOW'),
        (53,'MHHM','MEDIUM','HIGH','HIGH','MEDIUM'),
        (54,'MHHH','MEDIUM','HIGH','HIGH','HIGH'),
        (55,'HLLL','HIGH','LOW','LOW','LOW'),
        (56,'HLLM','HIGH','LOW','LOW','MEDIUM'),
        (57,'HLLH','HIGH','LOW','LOW','HIGH'),
        (58,'HLML','HIGH','LOW','MEDIUM','LOW'),
        (59,'HLMM','HIGH','LOW','MEDIUM','MEDIUM'),
        (60,'HLMH','HIGH','LOW','MEDIUM','HIGH'),
        (61,'HLHL','HIGH','LOW','HIGH','LOW'),
        (62,'HLHM','HIGH','LOW','HIGH','MEDIUM'),
        (63,'HLHH','HIGH','LOW','HIGH','HIGH'),
        (64,'HMLL','HIGH','MEDIUM','LOW','LOW'),
        (65,'HMLM','HIGH','MEDIUM','LOW','MEDIUM'),
        (66,'HMLH','HIGH','MEDIUM','LOW','HIGH'),
        (67,'HMML','HIGH','MEDIUM','MEDIUM','LOW'),
        (68,'HMMM','HIGH','MEDIUM','MEDIUM','MEDIUM'),
        (69,'HMMH','HIGH','MEDIUM','MEDIUM','HIGH'),
        (70,'HMHL','HIGH','MEDIUM','HIGH','LOW'),
        (71,'HMHM','HIGH','MEDIUM','HIGH','MEDIUM'),
        (72,'HMHH','HIGH','MEDIUM','HIGH','HIGH'),
        (73,'HHLL','HIGH','HIGH','LOW','LOW'),
        (74,'HHLM','HIGH','HIGH','LOW','MEDIUM'),
        (75,'HHLH','HIGH','HIGH','LOW','HIGH'),
        (76,'HHML','HIGH','HIGH','MEDIUM','LOW'),
        (77,'HHMM','HIGH','HIGH','MEDIUM','MEDIUM'),
        (78,'HHMH','HIGH','HIGH','MEDIUM','HIGH'),
        (79,'HHHL','HIGH','HIGH','HIGH','LOW'),
        (80,'HHHM','HIGH','HIGH','HIGH','MEDIUM'),
        (81,'HHHH','HIGH','HIGH','HIGH','HIGH')
    
    CREATE TABLE spat_idx_test_result(
        [perm_id] [int] NOT NULL,
        [num_cells] [int] NOT NULL,
        [permut] [nvarchar](4) NOT NULL,
        [g1t1] [bigint] NULL,
        [g1t2] [bigint] NULL,
        [g1t3] [bigint] NULL,
        [g1t4] [bigint] NULL,
        [g2t1] [bigint] NULL,
        [g2t2] [bigint] NULL,
        [g2t3] [bigint] NULL,
        [g2t4] [bigint] NULL
    )
    
    INSERT INTO dbo.spat_idx_test_result
    VALUES (0,16,0,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL)
END


/*    delete all rows from "spat_idx_test_result" table. This makes it ready to stuff in new results.
    !!!WARNING!!! if your test was interrupted, the table will be cleared out and the test will
    begin from the beginning. You could try to modify this to start where you left off but
    I didn't have time and this worked well enough for me. */
DELETE FROM spat_idx_test_result
    WHERE perm_id != 0

/* set up counters */
DECLARE @a1 INT
DECLARE @a2 INT
DECLARE @a3 INT
DECLARE @a4 INT

/* set up variables to hold high/medium/low values and permutation to use in rebuilding
   the spatial index and recording stats */
DECLARE @lev1 VARCHAR(6)
DECLARE @lev2 VARCHAR(6)
DECLARE @lev3 VARCHAR(6)
DECLARE @lev4 VARCHAR(6)
DECLARE @permut VARCHAR(6)
DECLARE @num_cell VARCHAR(4)
DECLARE @time_str VARCHAR(20)
DECLARE @perm_id VARCHAR(20)

/* create variables to hold timestamps for beginning and ending of test queries */
DECLARE @start_t DATETIME
DECLARE @end_t DATETIME
DECLARE @elapse_t INT

/* begin looping through cell option permutations */
SET @a1 = @min_cells_per_obj
WHILE @a1 <= @max_cells_per_obj
    BEGIN
        SET @a2 = 1
        PRINT 'Started Testing for ' +CAST(@a1 AS VARCHAR(10)) +' cells per object'
        WHILE @a2 < 82
            BEGIN
                SELECT @lev1 = level1, @lev2 = level2, @lev3 = level3, @lev4 = level4 FROM cell_opt_perm WHERE perm_id = @a2
                SET @permut = '''' +(SELECT permutation FROM cell_opt_perm WHERE perm_id = @a2) +''''
                EXEC
                    ('
                        CREATE SPATIAL INDEX ' +@idxnm +' ON ' +@tabnm +' 
                        (
                            [geom]
                        )
                        USING  GEOMETRY_GRID 
                        WITH
                            (
                                BOUNDING_BOX =(-100, -100, 300000, 650000),
                                GRIDS =(LEVEL_1 = ' +@lev1 +' ,LEVEL_2 = ' +@lev2 +' ,LEVEL_3 = ' +@lev3 +' ,LEVEL_4 = ' +@lev4 +' ), 
                                CELLS_PER_OBJECT = ' +@a1 +' ,
                                PAD_INDEX  = OFF,
                                SORT_IN_TEMPDB = OFF,
                                DROP_EXISTING = ON,
                                ALLOW_ROW_LOCKS  = ON,
                                ALLOW_PAGE_LOCKS  = ON,
                                FILLFACTOR = 100
                            )
                        ON [PRIMARY]'
                    )
                PRINT 'Re-built index to ' +@permut
                SET @a3 = 1
                SET @a4 = 1
                WHILE @a3 < 5
                    BEGIN
                        SET @start_t = GETDATE()
                        EXEC
                            (
                                'CREATE TABLE #tmp_tab (shp GEOMETRY)
                                DECLARE @g1 GEOMETRY
                                SET @g1 = ' +@testgeom1 +'
                                INSERT #tmp_tab (shp)
                                    SELECT
                                        r.geom AS shp
                                    FROM
                                        ' +@tabnm +' r
                                    WHERE
                                        r.geom.STIntersects(@g1) = 1
                                DROP TABLE #tmp_tab'
                            )
                        SET @end_t = GETDATE()
                        SET @elapse_t = (SELECT DATEDIFF(MS, @start_t, @end_t))
                        SET @num_cell = CAST(@a1 AS VARCHAR(6))
                        SET @time_str = CAST(@elapse_t AS VARCHAR(20))
                        IF @a3 = 1
                            BEGIN
                                IF (SELECT TOP 1 perm_id FROM spat_idx_test_result) IS NULL
                                    BEGIN
                                        SET @perm_id = 1
                                    END
                                ELSE
                                    BEGIN
                                        SET @perm_id = CAST((SELECT MAX(perm_id+1) FROM spat_idx_test_result) AS VARCHAR(20))
                                    END
                                EXEC
                                    (
                                        'INSERT INTO spat_idx_test_result (perm_id, num_cells, permut, g1t' +@a3 +')
                                        VALUES (' +@perm_id +', ' +@num_cell +', ' +@permut +', ' +@time_str +')'
                                    )
                            END
                        ELSE
                            EXEC
                                (
                                    'UPDATE spat_idx_test_result
                                    SET
                                    num_cells = ' +@num_cell +',
                                    permut = ' +@permut +',
                                    g1t' +@a3 +' = ' +@time_str +'
                                    WHERE perm_id = ' +@perm_id
                                )
                        SET @a3 = @a3 + 1
                    END
                WHILE @a4 < 5
                    BEGIN
                        SET @start_t = GETDATE()
                        EXEC
                            (
                                'CREATE TABLE #tmp_tab (shp GEOMETRY) 
                                DECLARE @g2 GEOMETRY
                                SET @g2 = ' +@testgeom2 +'
                                INSERT #tmp_tab (shp)
                                    SELECT
                                        r.geom AS shp
                                    FROM
                                        ' +@tabnm +' r
                                    WHERE
                                        r.geom.STIntersects(@g2) = 1
                                DROP TABLE #tmp_tab'
                            )
                        SET @end_t = GETDATE()
                        SET @elapse_t = (SELECT DATEDIFF(MS, @start_t, @end_t))
                        SET @num_cell = CAST(@a1 AS VARCHAR(6))
                        SET @time_str = CAST(@elapse_t AS VARCHAR(20))
                        EXEC
                            (
                                'UPDATE spat_idx_test_result
                                SET
                                num_cells = ' +@num_cell +',
                                permut = ' +@permut +',
                                g2t' +@a4 +' = ' +@time_str +'
                                WHERE perm_id = ' +@perm_id
                            )
                        SET @a4 = @a4 + 1
                    END
                SET @a2 = @a2 + 1
            END
        SET @a1 = @a1 + 1
    END
PRINT 'Testing of ' +@tabnm +' spatial index: ' +@idxnm +' is complete!'
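When the run completes, a query along these lines against the result table the procedure creates will rank the permutations – the four timings per test geometry are averaged, and the lowest combined time wins:

select permut, num_cells,
       (g1t1 + g1t2 + g1t3 + g1t4) / 4 as avg_g1_ms,
       (g2t1 + g2t2 + g2t3 + g2t4) / 4 as avg_g2_ms
from spat_idx_test_result
where perm_id != 0
order by (g1t1 + g1t2 + g1t3 + g1t4 + g2t1 + g2t2 + g2t3 + g2t4)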



  ...]]>
https://www.esdm.co.uk/optimising-sql-server-spatial-indexing-let-sql-do-the-work () https://www.esdm.co.uk/optimising-sql-server-spatial-indexing-let-sql-do-the-work https://www.esdm.co.uk/optimising-sql-server-spatial-indexing-let-sql-do-the-work Tue, 01 Oct 2013 09:11:00 GMT
SQL Server Management Studio 2008 bug expanding database list There is a bug in SSMS 2008 where, if you connect to a database server using an account that does not have rights to some of the databases, you can get an error when trying to expand the database list, preventing you from accessing the databases that you do have rights to.

The error you receive looks something like this:

Failed to retrieve data for this request. (Microsoft.SqlServer.Management.Sdk.Sfc)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&LinkId=20476

------------------------------
ADDITIONAL INFORMATION:

An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)

------------------------------

The server principal "CMSi-NATBN-Admin" is not able to access the database "CAMS-CCW-WCP2010" under the current security context. (Microsoft SQL Server, Error: 916)

The error is caused by SSMS failing to read some of the information about the databases. In order to fix the problem, you have to configure that specific SSMS installation not to read that data for that database server. You do this as follows:

  • Run SSMS and connect to required db server as sa or admin user account.
  • Press F7 to open the "Object Explorer Details" window
  • In the Object Explorer Details window, double-click on "Databases"
  • Right click on the header bar ("Name", "Recovery Model", etc) and deselect "Collation"
  • Disconnect from the db server
  • Reconnect to db server using desired account and test

Steve Ellwood  ...]]>
https://www.esdm.co.uk/sql-server-manager-studio-2008-bug-expanding-database-list stevee@esdm.co.uk (Steve Ellwood) https://www.esdm.co.uk/sql-server-manager-studio-2008-bug-expanding-database-list https://www.esdm.co.uk/sql-server-manager-studio-2008-bug-expanding-database-list Fri, 27 Sep 2013 11:27:00 GMT
Invalid column name problem with GeoServer 2.3 and SQL Server views I have been getting this error with a WMS layer based on a SQL Server 2008 view, when using a GetFeatureInfo request:

java.lang.RuntimeException: java.io.IOException java.io.IOException null Invalid column name 'VALI, ROW_NUMBER() OVER (ORDER BY CURRENT_TIMESTAMP) AS _GT_ROW_NUMBER DFROM'.

I eventually discovered that the cause is having a field name containing the string "FROM". Specifically, I had

...t3.VALID_FROM AS VALIDFROM , t3.VALID_TO AS VALIDTO,...

This suggests some pretty sloppy SQL parsing in GeoServer, and for the moment I've resolved this by changing my alias to VALID_STARTDATE.
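In other words, when building views for GeoServer, avoid column aliases containing the string "FROM", e.g.:

-- fails: the VALIDFROM alias gets mangled by GeoServer's row-numbering rewrite
... t3.VALID_FROM AS VALIDFROM, t3.VALID_TO AS VALIDTO ...

-- works
... t3.VALID_FROM AS VALID_STARTDATE, t3.VALID_TO AS VALIDTO ...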


Crispin Flower  ...]]>
https://www.esdm.co.uk/invalid-column-name-problem-with-geoserver-23-and-sql-server-views crispin.flower@idoxgroup.com (Crispin Flower) https://www.esdm.co.uk/invalid-column-name-problem-with-geoserver-23-and-sql-server-views https://www.esdm.co.uk/invalid-column-name-problem-with-geoserver-23-and-sql-server-views Thu, 13 Jun 2013 16:01:00 GMT
Transferring user accounts from ASP.NET to MojoPortal Occasionally we need to transfer user accounts from existing ASP.NET membership-based sites into MojoPortal.

Note: if the databases are on the same server, or can be linked, then it is possible to take the SQL of the SELECTs in the ExportASPNETUserData.bat file and use them to insert directly into the MojoPortal database tables. Using the .bat files will work wherever the databases are, as long as your PC can connect to both.
If you can't connect to both, run the XMLFormat and Export scripts on a machine that can connect to the ASP.NET membership database, then copy the files and run the Import script on a machine that can connect to the MojoPortal database.

Note: It is possible to do the format/export/import in one script, but being able to sanity-check the data before inserting is useful. You could also use staging tables to test the scripts before using the real MojoPortal tables.

Note: The switches set in each file are those I found were best for the LHaR data. Change them if you feel it is appropriate for your data. See: BCP Utility Technical Reference

 

How-to

  • Set MojoPortal to use plain text passwords
  • Create a folder on your PC to contain the scripts
  • In this folder, create the following .bat files:
    • CreateMojoUserTablesXMLFormats.bat
    • ExportASPNETUserData.bat
    • ImportMojoPortalUserData.bat
  • In this folder create the following subdirectories:
    • data
    • formats
    • output
  • Create and run the CreateMojoUserTablesXMLFormats.bat file
  • Create and run the ExportASPNETUserData.bat file
    • Note: add as many user properties as required. To do this, select out each property in a separate select and then UNION the results together
    • Check the output directory for any errors
    • Check the data files have been created and contain the data you expect
  • Create and run the ImportMojoPortalUserData.bat
    • Check the output directory for any errors
    • Check the database tables have the data
  • Switch MojoPortal back to using encrypted passwords
  • Test a few logins

 

 

CreateMojoUserTablesXMLFormats.bat

set login=-U [Username] -P [Password] -S [SQL Server Instance]
set database=[Databasename].[Schema].
set switches= -c -x
set formatDir=[Full path to formats directory with trailing slash]

bcp %database%mp_Users format nul %login% %switches% -f %formatDir%mojoUsers.xml
bcp %database%mp_UserProperties format nul %login% %switches% -f %formatDir%mojoUserProperties.xml
bcp %database%mp_userRoles format nul %login% %switches% -f %formatDir%mp_userRoles.xml

 

ExportASPNETUserData.bat file

set login=-U [Username] -P [Password] -S [SQL Server Instance]
set dataDir=[Full path to data directory with trailing slash]
set database=[Databasename].[Schema].
set switches= -k -c

bcp "SELECT 0 AS UserID, 1 AS SiteID, aspnet_Users.Username AS Name, aspnet_Users.UserName AS LoginName, COALESCE (aspnet_Membership.Email,aspnet_Users.UserName + '@dummyemail.co.uk' ) AS Email, aspnet_Membership.LoweredEmail AS LoweredEmail, NULL AS PasswordQuestion, NULL AS PasswordAnswer, NULL AS Gender, aspnet_Membership.IsApproved AS ProfileApproved, NULL AS RegisterConfirmGUID, aspnet_Membership.IsApproved AS ApprovedForForums, 0 AS Trusted, CASE aspnet_Roles.RoleName WHEN 'xg' THEN 0 ELSE 1 END AS DisplayInMemberList, NULL AS WebsiteURL, NULL AS Country, NULL AS [State], NULL AS Occupation, NULL AS Interests, NULL AS MSN, NULL AS Yahoo, NULL AS AIM, NULL AS ICQ, 0 AS TotalPosts, NULL AS AvatarURL, 0 AS TimeOffsetHours, NULL AS [Signature], aspnet_Membership.CreateDate AS DateCreated, aspnet_Membership.UserId AS userGUID, NULL AS Skin, 0 AS IsDeleted, aspnet_Users.LastActivityDate AS LastActivityDate, aspnet_Membership.LastLoginDate AS LastLoginDate, NULL AS LastPasswordChangedDate, NULL AS LastLockoutDate, 0 AS FailedPasswordAttemptCount, NULL AS FailedPwdAttemptWindowStart, 0 AS FailedPwdAttemptCount, NULL AS FailedPwdAnswerWindowStart, aspnet_Membership.IsLockedOut AS IsLockedOut, NULL AS MobilePin, NULL AS PasswordSalt, NULL AS Comment, NULL AS OpenIDURI, NULL AS WindowsLiveID, '77C33D82-D6F0-49ED-95A7-84C11919AD94' AS SiteGUID, NULL AS TotalRevenue, userInfo.ForeName AS FirstName, userInfo.Surname AS LastName, aspnet_Users.Username as Pwd, 1 AS MustChangePassword, NULL AS NewEmail, NULL AS EditorPreference, '00000000-0000-0000-0000-000000000000' AS EmailChangeGuid, NULL AS TimeZoneID, '00000000-0000-0000-0000-000000000000' AS PasswordResetGuid FROM %database%aspnet_Membership INNER JOIN %database%aspnet_Users ON aspnet_Membership.UserId=aspnet_Users.UserId INNER JOIN %database%userInfo on aspnet_Membership.UserId=userInfo.userId INNER JOIN %database%aspnet_UsersInRoles ON aspnet_Membership.UserId=aspnet_UsersInRoles.UserId INNER JOIN %database%aspnet_Roles ON aspnet_UsersInRoles.RoleId = aspnet_Roles.RoleId" queryout %dataDir%lharUsers.txt %switches% %login% -o output/usersOutput.txt

bcp "select RoleName,UserId from %database%aspnet_Roles INNER JOIN %database%aspnet_UsersInRoles on aspnet_Roles.RoleId=aspnet_UsersInRoles.RoleId" queryout %dataDir%lharUsersRoles.txt %switches% %login% -o output/userRolesOutput.txt

bcp "SELECT newid() AS PropertyID, UserId AS userGUID, [String to use as MojoPortal PropertyName] AS PropertyName, Title AS PropertyValueString, NULL AS PropertyValueBinary, GETDATE() AS LastUpdatedDate, 0 AS IsLazyLoaded FROM %database%UserInfo
UNION ALL
SELECT newid() AS PropertyID, UserId AS userGUID, [String to use as MojoPortal PropertyName] AS PropertyName, Title AS PropertyValueString, NULL AS PropertyValueBinary, GETDATE() AS LastUpdatedDate, 0 AS IsLazyLoaded FROM %database%UserInfo" queryout %dataDir%lharUserProfiles.txt %switches% %login% -o output/userPropertiesOutput.txt

 

ImportMojoPortalUserData.bat

set login=-U [Username] -P [Password] -S [SQL Server Instance]
set database=[Databasename].[Schema].
set dataDir=[Full path to data directory with trailing slash]
set switches= -k -R
set formatDir=[Full path to outputs directory with trailing slash]

bcp %database%mp_users IN %dataDir%lharUsers.txt %login% %switches% -f %formatDir%mojoUsers.xml -o output/importUsers.txt
bcp %database%mp_userProperties IN %dataDir%lharUserProfiles.txt %login% %switches% -f %formatDir%mojoUserProperties.xml -o output/importUserProperties.txt
bcp %database%mp_userRoles IN %dataDir%lharUsersRoles.txt %login% %switches% -f %formatDir%mp_userRoles.xml -o output/importUserRoles.txt


  ...]]>
https://www.esdm.co.uk/transferring-user-accounts-from-aspnet-to-mojoportal () https://www.esdm.co.uk/transferring-user-accounts-from-aspnet-to-mojoportal https://www.esdm.co.uk/transferring-user-accounts-from-aspnet-to-mojoportal Mon, 04 Mar 2013 13:32:00 GMT
How to take a SQL Server backup without breaking the backup sequence When a SQL Server is configured to run scheduled full backups followed by a sequence of differential backups, taking an ad hoc full backup resets the differential base, potentially breaking the ability to restore to points in the sequence after the ad hoc backup was taken. If a backup file is required, for example as a simple way of copying the database to another server, the solution is to take a “copy-only” backup, which leaves the backup sequence untouched.

This is achieved by ticking the “Copy-only Backup” checkbox, and setting an appropriate path for your backup file, as shown here in SQL Server 2008 Management Studio:

SQL Server copy-only backup

It can also be achieved using command-line T-SQL.
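For example (the database name and file path here are placeholders):

BACKUP DATABASE MyDatabase
TO DISK = 'D:\Backups\MyDatabase.bak'
WITH COPY_ONLY;

The COPY_ONLY option is what stops the backup from affecting the differential base, so the scheduled sequence carries on as if the ad hoc backup had never happened.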

More information and explanation here from Microsoft.


Crispin Flower  ...]]>
https://www.esdm.co.uk/how-to-take-a-sql-server-backup-without-breaking-the-backup-sequence crispin.flower@idoxgroup.com (Crispin Flower) https://www.esdm.co.uk/how-to-take-a-sql-server-backup-without-breaking-the-backup-sequence https://www.esdm.co.uk/how-to-take-a-sql-server-backup-without-breaking-the-backup-sequence Sat, 06 Oct 2012 19:00:30 GMT
How to drop all connections to a SQL Server database Mike S-G has passed me this useful tool for dropping all connections from a database. This is required before deleting or re-naming databases, for example.

This creates a stored procedure in the “master” database, which can then be called to operate on any one database.

USE [master]
GO
/****** Object:  StoredProcedure [dbo].[usp_killprocess]    Script Date: 09/18/2012 14:55:47 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
 
CREATE PROCEDURE [dbo].[usp_killprocess] @dbname VARCHAR(128)
AS 
      SET nocount ON
      SET quoted_identifier OFF
      DECLARE @kill_id INT
      DECLARE @query VARCHAR(320)
      DECLARE killprocess_cursor CURSOR
      FOR
              SELECT    a.spid
              FROM      sysprocesses a
                        JOIN sysdatabases b ON a.dbid = b.dbid
              WHERE     b.name = @dbname
 
      OPEN killprocess_cursor
      FETCH NEXT FROM killprocess_cursor INTO @kill_id
      WHILE ( @@fetch_status = 0 ) 
            BEGIN
                  SET @query = 'kill ' + CONVERT(VARCHAR, @kill_id)
                  EXEC (@query)
                  FETCH NEXT FROM killprocess_cursor INTO @kill_id
            END
      CLOSE killprocess_cursor
      DEALLOCATE killprocess_cursor
 
--usage
--exec usp_killprocess "mydatabasename" 

Example usage:

exec usp_killprocess 'myDatabaseName'
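If the aim is simply to delete or rename the database, and you don't mind rolling back any open transactions, an alternative is to force the database into single-user mode instead (substitute your database name):

ALTER DATABASE myDatabaseName SET SINGLE_USER WITH ROLLBACK IMMEDIATE
-- perform the rename/delete, then if the database is staying around:
ALTER DATABASE myDatabaseName SET MULTI_USER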


Crispin Flower  ...]]>
https://www.esdm.co.uk/how-to-drop-all-connections-to-a-sql-server-database crispin.flower@idoxgroup.com (Crispin Flower) https://www.esdm.co.uk/how-to-drop-all-connections-to-a-sql-server-database https://www.esdm.co.uk/how-to-drop-all-connections-to-a-sql-server-database Tue, 18 Sep 2012 14:05:00 GMT
Multiple SQL Server log-in failures caused by connection string error - HBSMRv3 A SQL Server DBA reported many login failures in his SQL logs - variously quoted as “hundreds per day” and “50+ in a minute”. These failures were not apparent to system users. Two user communities were accessing this data - one using trusted connections, the other using SQL Server authentication. Only the trusted connection group was affected by this issue. The connection string included the clause <Trusted_Connections=Yes>; this should have been <Trusted_Connection=Yes>. Once the connection string had been corrected, the log-in failures stopped.

Example of repeated log-in failures from SQL logs:

2012-04-18 09:19:44.50 Logon       Login failed for user 'smruser'. [CLIENT: 159.15.250.xxx]
2012-04-18 09:19:44.53 Logon       Error: 18456, Severity: 14, State: 5.
2012-04-18 09:19:44.53 Logon       Login failed for user ‘yyyy.xxxx’. [CLIENT: 159.15.250.xxx]
2012-04-18 09:19:44.54 Logon       Error: 18456, Severity: 14, State: 5.
2012-04-18 09:19:44.54 Logon       Login failed for user 'smruser'. [CLIENT: 159.15.250.xxx]
2012-04-18 09:19:46.87 Logon       Error: 18456, Severity: 14, State: 5.
2012-04-18 09:19:46.87 Logon       Login failed for user ‘zzzz.yyyy’. [CLIENT: 159.15.250.xxxx]
2012-04-18 09:19:46.90 Logon       Error: 18456, Severity: 14, State: 5.
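For reference, the corrected ODBC-style clause in context looks like this (the server and database names are placeholders):

DRIVER=SQL Server;SERVER=myServer;DATABASE=myDatabase;Trusted_Connection=Yes;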


  ...]]>
https://www.esdm.co.uk/multiple-sql-server-log-in-failures-caused-by-connection-string-error-hbsmrv3 () https://www.esdm.co.uk/multiple-sql-server-log-in-failures-caused-by-connection-string-error-hbsmrv3 https://www.esdm.co.uk/multiple-sql-server-log-in-failures-caused-by-connection-string-error-hbsmrv3 Mon, 23 Apr 2012 08:28:00 GMT
Accessing SQL Server 2008 Express (on Win 7 OS) over the network I hit a problem where I needed to enable network access to SQL Server 2008 Express R2 on a Win7 64-bit OS. The solution was previously posted at: http://www.sevenforums.com/system-security/58817-remote-access-sql-server-express-2008-windows-7-a.html

 

Here are some step by step details for getting this to work:


1) Enable remote connections to SQL Server. (In SSMS, connect to the DB server, right-click it and select Properties, then Connections… make sure the remote connections checkbox is ticked).

2) Set up Inbound and Outbound rules for TCP port 1433

3) Set up Inbound and Outbound rules for UDP port 1434

4) Set up inbound and outbound exceptions for sqlservr.exe. I am running a 64 bit machine so this file was located here: C:\Program Files\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQL\Binn\sqlservr.exe

5) Set up inbound and outbound exceptions for sqlbrowser.exe. Again, I am running a 64 bit machine so the file was located here: C:\Program Files (x86)\Microsoft SQL Server\90\Shared\sqlbrowser.exe
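Steps 2 and 3 can also be done from an elevated command prompt rather than through the firewall UI; the rule names below are arbitrary (outbound traffic is typically allowed by default, so add matching dir=out rules only if your policy blocks it):

netsh advfirewall firewall add rule name="SQL Server TCP 1433" dir=in action=allow protocol=TCP localport=1433
netsh advfirewall firewall add rule name="SQL Browser UDP 1434" dir=in action=allow protocol=UDP localport=1434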

 

And that's it.


Laine Skinner  ...]]>
https://www.esdm.co.uk/accessing-sql-server-2008-express-on-win-7-os-over-the-network laines@esdm.co.uk (Laine Skinner) https://www.esdm.co.uk/accessing-sql-server-2008-express-on-win-7-os-over-the-network https://www.esdm.co.uk/accessing-sql-server-2008-express-on-win-7-os-over-the-network Wed, 21 Mar 2012 15:47:00 GMT
Calculating the extent of the geometry in a SQL Server table Andy Brewer passed me this, adapted from http://barendgehrels.blogspot.co.uk/2011/04/extent-of-sql-server-spatial-table.html

This is a useful script, using CTEs (common table expressions) to calculate the extent of the geometry in a SQL Server table. Replace “geom” with the name of your geometry column and “MyTable” with the name of your table.

Note that this script depends on STEnvelope returning a rectangular extent with its first point at the SW corner and its third at the NE corner. Tests from the author of the blog above confirm that SQL Server currently does this.

with
  cte_envelope as
  (
    select geom.STEnvelope() as envelope from MyTable
  ),
  cte_corner as
  (
    select envelope.STPointN(1) as point from cte_envelope
    union all
    select envelope.STPointN(3) from cte_envelope
  )
select min(point.STX) as min_x, min(point.STY) as min_y, max(point.STX) as max_x, max(point.STY) as max_y
from cte_corner
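If you are on SQL Server 2012 or later, the built-in spatial aggregate gives the same answer in one step (same geom and MyTable names assumed):

select geometry::EnvelopeAggregate(geom) as extent from MyTable

This returns the extent as a single geometry rather than four numbers, but avoids the dependency on STEnvelope's corner ordering.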

Crispin Flower  ...]]>
https://www.esdm.co.uk/calculating-the-extent-of-the-geometry-in-a-sql-server-table crispin.flower@idoxgroup.com (Crispin Flower) https://www.esdm.co.uk/calculating-the-extent-of-the-geometry-in-a-sql-server-table https://www.esdm.co.uk/calculating-the-extent-of-the-geometry-in-a-sql-server-table Tue, 20 Mar 2012 19:18:00 GMT