Category Archives: Technical

Configuring Python and GeoDjango for Amazon Web Services Deployment

Quick rundown of configuring GeoDjango to run on Amazon Web Services' Elastic Beanstalk. This assumes you have a separate database instance up and running and that you've set up the project to use Elastic Beanstalk (i.e. eb init). Configuration files are broken into the tasks they handle and are read automatically by Elastic Beanstalk.

  1. Create a directory named .ebextensions in the root of the project.
  2. Under .ebextensions add the following files:
    • commands.config – Executes shell commands to deploy static files and migrate database changes. Enables the Extra Packages for Enterprise Linux (EPEL) yum repository so that you can install the GIS libraries through yum (very cool).
    • django.config – default settings for Elastic Beanstalk.
    • environment.config – Setup any environmental variables that you want.
    • packages.config – Tells Elastic Beanstalk what packages to install.
  3. commands.config:

    commands:
      01_enable_epel:
        command: "sudo yum-config-manager --enable epel"

    container_commands:
      01_collectstatic:
        command: "source /opt/python/run/venv/bin/activate && python manage.py collectstatic --noinput"
      02_migrate:
        command: "source /opt/python/run/venv/bin/activate && python manage.py migrate"
        leader_only: true

    Entries under commands run before packages are installed, so the EPEL repository is enabled before packages.config is processed. Entries under container_commands run after the application is deployed, and leader_only restricts migrations to a single instance.
  4. django.config:

    option_settings:
      "aws:elasticbeanstalk:container:python":
        WSGIPath: kyp_backend/
  5. environment.config:

    option_settings:
      - option_name: DB_NAME
        value: [db_name]
      - option_name: DB_USERNAME
        value: [username]
      - option_name: DB_PASSWORD
        value: [password]
      - option_name: DB_HOSTNAME
        value: [amazon host name]
      - option_name: DB_PORT
        value: [port]
      - option_name: HOST_ENVIRONMENT
        value: [development/test/production]

  6. packages.config:

    packages:
      yum:
        postgresql94-devel: []
        geos: []
        binutils: []
        gdal-devel: []
        proj: []
        gdal: []
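With environment.config in place, the Django settings can read the variables back at runtime. A sketch of the database block in settings.py; the PostGIS engine string is the standard GeoDjango backend, the variable names mirror environment.config above, and the DEBUG toggle is an assumption:

```python
import os

# Database settings pulled from the Elastic Beanstalk environment
# (variable names match environment.config above).
DATABASES = {
    "default": {
        "ENGINE": "django.contrib.gis.db.backends.postgis",
        "NAME": os.environ.get("DB_NAME"),
        "USER": os.environ.get("DB_USERNAME"),
        "PASSWORD": os.environ.get("DB_PASSWORD"),
        "HOST": os.environ.get("DB_HOSTNAME"),
        "PORT": os.environ.get("DB_PORT"),
    }
}

# Example of using HOST_ENVIRONMENT: only enable debug outside production.
DEBUG = os.environ.get("HOST_ENVIRONMENT", "development") != "production"
```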

OGR/GDAL Examples

Generate a shapefile from a PostgreSQL table
ogr2ogr -f "ESRI Shapefile" [Shapefile Name].shp PG:"dbname='[DB Name]' host='[Host Name]' port='[Port Number]' user='[User Name]'" [Table Name]
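The reverse direction, loading a shapefile into PostgreSQL, follows the same pattern (the optional -nln flag names the output table):

```
ogr2ogr -f PostgreSQL PG:"dbname='[DB Name]' host='[Host Name]' port='[Port Number]' user='[User Name]'" [Shapefile Name].shp -nln [Table Name]
```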

Format Types

  • ESRI Shapefile
  • MapInfo File
  • S57
  • DGN
  • Memory
  • BNA
  • CSV
  • GML
  • GPX
  • KML
  • GeoJSON
  • Interlis 1
  • Interlis 2
  • GMT
  • GPKG
  • SQLite
  • ODBC
  • WAsP
  • MSSQLSpatial
  • PostgreSQL
  • DXF
  • Geoconcept
  • GeoRSS
  • GPSTrackMaker
  • PGDump
  • GPSBabel
  • SOSI
  • GFT
  • GME
  • CouchDB
  • ODS
  • XLSX
  • ElasticSearch

PostgreSQL/PostGIS Cheatsheet

Export table data (data only):
pg_dump --column-inserts -a -h [host name] -U [user] -d [database] -t [table] > [output file]

Export table data and table:
pg_dump --column-inserts -h [host name] -U [user] -d [database] -t [table] > [output file]

Execute script:
psql -h [host] -U [user] -d [database] -f [input file]

SDE Examples for Oracle

Export from database to SDE export file
sdeexport -l [SOURCE_TABLE],[GEOMETRY FIELD] -f .\[OUTPUT FILE NAME].sdx -i sde:oracle11g -u [USERNAME] -p [PASSWORD]@[TNS NAME]

Export from database to shapefile
sde2shp -o init -l [SOURCE_TABLE],[GEOMETRY FIELD] -f [OUTPUT FILE NAME].shp -t polygonZ -a all -i sde:oracle11g -u [USERNAME] -p [PASSWORD]@[TNS NAME]

Import to database from SDE export file
sdeimport -o CREATE -t [OUTPUT_TABLE_NAME] -f [INPUT_FILE_NAME] -g RTREE -k [DBTUNE KEYWORD] -i sde:oracle11g -u [USER] -p [PASSWORD]@[TNS NAME]

Delete table from database including SDO/SDE metadata.
sdetable -o delete -t [TABLE_NAME] -u [USER] -p [PASSWORD]@[TNS NAME] -i sde:oracle11g

Drop and Create Spatial Index
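One way to do this directly in Oracle SQL, rather than through the sde command line tools:

```
-- Drop the existing spatial index.
DROP INDEX [INDEX_NAME];

-- Recreate it as an Oracle Spatial domain index.
CREATE INDEX [INDEX_NAME] ON [TABLE_NAME] ([GEOMETRY_FIELD])
  INDEXTYPE IS MDSYS.SPATIAL_INDEX;
```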


Oracle Spatial Examples

Get projection information (note: table alias required).
select c.[GEOMETRY_FIELD].sdo_srid
from [TABLE_NAME] c;

Get the geometry type
select c.[GEOMETRY_FIELD].sdo_gtype
from [TABLE_NAME] c;

Interpret the results using the SDO_GTYPE encoding DLTT: D is the number of dimensions, L is the linear referencing measure dimension (0 when unused), and TT is the shape type (01 = point, 02 = line, 03 = polygon, 04 = collection, 05 = multipoint, 06 = multiline, 07 = multipolygon).

Ensure table is registered in Oracle Spatial

select *
from mdsys.sdo_geom_metadata_table;

Get spatial tolerance of table
SELECT sdo_dimname, sdo_tolerance
FROM TABLE (
  SELECT DIMINFO
  FROM user_sdo_geom_metadata
  WHERE table_name = '[TABLE_NAME]'
);

Create point geometry from x/y values
SELECT mdsys.SDO_GEOMETRY(
  2001, -- 2D point
  [SRID],
  mdsys.SDO_POINT_TYPE([X Ordinate], [Y Ordinate], NULL),
  NULL,
  NULL)
FROM dual;

Create Spatial Index
CREATE INDEX "[Index Name]" ON "[Table Name]" ("[Geometry Field]")
  INDEXTYPE IS MDSYS.SPATIAL_INDEX;

Add Geometry Metadata
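A typical way to register a layer in USER_SDO_GEOM_METADATA (the 0.005 tolerance and the bound placeholders are assumptions):

```
INSERT INTO user_sdo_geom_metadata (table_name, column_name, diminfo, srid)
VALUES (
  '[TABLE_NAME]',
  '[GEOMETRY_FIELD]',
  mdsys.SDO_DIM_ARRAY(
    mdsys.SDO_DIM_ELEMENT('X', [Min X], [Max X], 0.005),
    mdsys.SDO_DIM_ELEMENT('Y', [Min Y], [Max Y], 0.005)
  ),
  [SRID]
);
```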


Determine Spatial Relationship Between Geometries
select sdo_geom.relate(
         a.[Geometry_Field]
       , 'determine'
       , b.[Geometry_Field]
       , [Tolerance]
       ) as relationship
from [Table_Name_1] a, [Table_Name_2] b;

Testing Coordinate Transformation in PostGIS

I spend a lot of time thinking about which projection to use for a given project. Sometimes picking the perfect projection bites me when things change (such as when the area of interest expands). In my recent case I had been avoiding reprojecting my data because I hadn't done it before in PostGIS and I was convinced it would be a major undertaking. After a minute's worth of research I realized I may have been dead wrong. In fact, the process seemed so easy and quick that I was leery. So here it is, my attempt at convincing myself that reprojecting in PostGIS is an inconsequential task.

First I will create some test data using a table in UTM 17 N meters (EPSG:26917).
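A minimal version of that setup, assuming the "ProjTest" table and geom column used in the queries that follow (the inserted point is the UTM coordinate quoted later in the post):

```
CREATE TABLE "ProjTest" (
  id serial PRIMARY KEY,
  geom geometry(Point, 26917)
);

INSERT INTO "ProjTest" (geom)
VALUES (ST_SetSRID(ST_MakePoint(250230.930254224, 3854029.95272766), 26917));
```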


Let’s prove this worked:

SELECT 'Table_SRID' as INFO, find_srid('public', 'ProjTest', 'geom');
SELECT ST_AsText(geom), ST_SRID(geom) FROM "ProjTest";



Next we'll do the transformation to US National Atlas Equal Area (EPSG:2163):

ALTER TABLE "ProjTest"
  ALTER COLUMN geom TYPE geometry(Point,2163)
  USING ST_Transform(geom,2163);

Then check the results:
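Re-running the SRID query from above confirms the transform (the SRID should now be 2163):

```
SELECT ST_AsText(geom), ST_SRID(geom) FROM "ProjTest";
```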


Finally let’s display it in QGIS:

First I'll leave the project projection in QGIS set to UTM 17N; it looks as expected.

Just to double check, I’ll change the project projection to 2163. Surprisingly things look crazy. Notice that the offset is not predictable, meaning that it might not be a datum shift.

Interestingly, projecting "on the fly" to other coordinate systems does not reproduce this offset. So, perhaps it is the Google Physical web service that I am using that is having problems reprojecting to US National Atlas Equal Area, and not the reprojection of the points themselves. To test that, we will reproject one of the coordinates back to UTM from US National Atlas Equal Area using GDAL.

gdaltransform -s_srs EPSG:2163 -t_srs EPSG:26917
1480246.71738073 -990001.583001807

Original = 250230.930254224, 3854029.95272766
GDAL Transformation = 250230.930251404, 3854029.95272553

Looks like the reprojection was successful, just need to figure out the problems with reprojecting the base map.

In summary, reprojecting data in PostGIS is trivial. The work can be done in place and the scripts are very succinct. I am extremely pleased with this implementation, thanks guys!

GeoForst Lite, Adding a New Property – Part 2


Last time we downloaded and configured the software and data sources we needed to start adding data to the project. If you missed it, just visit Getting Started with GeoForst Lite.

If you're familiar with GIS, this post won't be worth much, but skimming through it will get you familiar with some of the datasets. In this post we are focusing on adding data to the Land_Area table, which acts as a container for properties.

Alternately, if you aren’t familiar with GIS, this will give you enough knowledge to add a property to the map. There are links to some QGIS tutorials in the post that are worth visiting.

Helpful Items

There are a few things that you should have or know before you start. Here are some that may help you on your project:

  • The ability to find your property on a map.
  • A plat map, or some other map of the boundaries and corners of your property.
  • Intimate knowledge of your land.

Adding a property

The first thing we want to do is add the boundary of your property, so we have an area to work within. Before we do that we need to find the property on the map. This is where the osmSearch plugin will help. To make the osmSearch panel visible go to View > Panels > osmSearch. You can dock this anywhere you like, or not; your choice.

Adding osmSearch panel.

For this, turn on the Google Physical layer that we added through the OpenLayers plugin, type the city and state of the location into the osmSearch panel, and hit Search. The location will then appear in the results field. Clicking on the result will pan you to the location you chose. Unfortunately it will not zoom, so you'll need to choose an appropriate scale; 1:500,000 is a good place to start.

Note: osmSearch will highlight the feature you searched for in red, which can be annoying once you’ve zoomed in to the location. Dismiss it by clicking the “x” in the right hand corner of the search text box.

Zoom to place

At this point you should be able to find your property and zoom in so that the entire property is just visible. You'll probably want to turn off the Google Physical layer and turn on the Bing Aerial layer as you zoom in; you'll have to feel that transition out for yourself.


Adding a Property to the Map

Next we will edit the layer named "Land Areas" to add the property boundary. To do this, select the layer, right click on it, and select Toggle Editing. At this point the layer is ready for editing.


Click on the add feature button on the editing toolbar and begin delineating your property by digitizing the boundaries. For more information on editing layers in QGIS, visit digitizing an existing layer.


Digitize your property boundary and add the attributes as shown below.
Adding Property Attributes

Stop editing, and save your edits. You should have something like this:
Newly added property.

Save the project before you close down QGIS and next post we will add some forest stands.

GeoForst Lite, Getting Started – Part 1


GeoForst Lite is a standalone package consisting of three parts: QGIS, a local datastore, and a QGIS map.


  1. Download QGIS version 2.x and install.
  2. Download GeoForst Lite.
  3. Unzip GeoForst Lite and move it to any directory you choose. Your home or documents directory is a fine place for it.

Adding Plugins

At this point you should be able to start QGIS. There are a couple of plugins that will make life easier. Let's step through the process.

Begin by opening the QGIS application, it should open to a blank project. At this point we need to add a couple plugins to make life easy. To do this, select the Plugins menu and select Manage and Install Plugins….


Now we want the good stuff, so let’s enable the experimental plugins and we are ready to get our plugins.


Okay, let's get two plugins: the OpenLayers plugin and the osmSearch plugin. OpenLayers lets us display a variety of excellent base layers such as Google Terrain and Bing Imagery. This is really fantastic stuff. The osmSearch plugin lets you find locations by place name, which beats finding a location by pan and zoom alone. Search for those plugins and install them.

Installing the OpenLayers plugin…

Finally let’s add the OSM place search plugin…

At this point the setup is complete. Open the GForst_Lite.qgs file that you unzipped and placed on your file system, and you will see three layers under the External Layers group.


Note: The OpenLayers plugin layers need a kick in the pants to scale correctly when first added. Zoom in or pan a bit to force them to redraw. It's wonky, but a small price to pay for a great addition to QGIS.

Next up

Adding Your Property to GeoForst.

SQL Spatial Examples

Posting these here for my own reference, but use them yourself if you’d like.

Select Well Known Text from Geometry
select [Geometry Column].STAsText() from [Table];

Create a geography point from lat/long fields, good for insert/update statements
select geography::STGeomFromText('POINT (' + convert(varchar(20),[LONGITUDE]) + ' ' + convert(varchar(20),[LATITUDE]) + ')', 4269) from [TABLE];

Intersection Example:
SELECT g.*, g.[Geometry Field].STAsText() FROM [Schema].[Table] g WHERE [Geometry Field].STIntersects(geometry::STGeomFromText('POLYGON ((-131.232584635417 55.4096875, -131.230598958333 55.4096875, -131.230598958333 55.4100455729167, -131.232584635417 55.4100455729167, -131.232584635417 55.4096875))', [EPSG ID])) = 1

Get Area of Intersection
SELECT t1.[Geometry Field].STIntersection(t2.[Geometry Field]).STArea() as IntersectionArea
FROM [TABLE_NAME_1] t1, [TABLE_NAME_2] t2
WHERE t1.[Geometry Field].STIntersects(t2.[Geometry Field]) = 1;

Create Spatial Index:
CREATE SPATIAL INDEX [sidx_tablename_geom] ON [Schema].[Table]
(
  [Geometry Field]
)
WITH (
  BOUNDING_BOX = (-173.116318241729, 52.7677349180939, -129.473430300096, 71.3903800225212),
  GRIDS = (LEVEL_1 = MEDIUM, LEVEL_2 = MEDIUM, LEVEL_3 = MEDIUM, LEVEL_4 = MEDIUM)
);

Get Bounding Box of Feature Class:
WITH ENVELOPE as (
  SELECT [Geometry Field].STEnvelope() as envelope FROM [Schema].[Table]
),
CORNERS as (
  SELECT envelope.STPointN(1) as point FROM ENVELOPE
  UNION ALL SELECT envelope.STPointN(3) FROM ENVELOPE
)
SELECT
  MIN(point.STX) as MinX,
  MIN(point.STY) as MinY,
  MAX(point.STX) as MaxX,
  MAX(point.STY) as MaxY
FROM CORNERS;

Use GDAL to Insert Data from Shapefile to MS SQL:
ogr2ogr -f MSSQLSpatial "MSSQL:server=[Server Name];database=[DB Name];trusted_connection=yes" "…\[Shapefile Name].shp"

SRID from geometry field
select [Geometry Field Name].STSrid FROM [TABLE NAME];

Using a Power Mac G5 as a ZFS Time Machine Backup Server

If you are considering setting up a home network file server using Zettabyte File System (ZFS) technology, the Power Mac G5 should be given strong consideration. For one, these machines are inexpensive, they have room for lots of disks (see G5 Jive), they are beautiful, and they network natively with your other Mac hardware using AFP.

Documentation on using OS X as a ZFS file server and Time Machine backup location is scarce. After reading about using OpenSolaris for this purpose (Time Machine Volumes with ZFS and AFP) and some serious trial and error, I have a working implementation. To keep others from struggling through it, here are my notes; I hope you find them useful.

To get started make sure you have installed MacZFS and followed the MacZFS Getting Started instructions to create a pool. How one creates a ZFS pool is highly variable, situation dependent, and very well documented; I can't do a better job, so I won't repeat it.

Here are the baseline ZFS pools for reference, before creating a new example pool:

zpool status

  pool: mercury
 state: ONLINE
 scrub: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    mercury     ONLINE       0     0     0
      disk0s2   ONLINE       0     0     0

errors: No known data errors

zfs list

NAME                 USED  AVAIL  REFER  MOUNTPOINT
mercury              338G  2.36T   172G  /Volumes/mercury
mercury/Music       27.6G  2.36T  27.6G  /Users/torsten/Music
mercury/tm-quercus   136G   364G   136G  /Volumes/tm-quercus
mercury/tm-smorin   2.86G   247G  2.86G  /Volumes/tm-smorin

Okay now let’s make a new pool and ready it for sharing:

-- Create a new filesystem for the backups.
zfs create mercury/backups

-- Limit the backup filesystem to 500 GB.
zfs set quota=500G mercury/backups

-- Restrict permissions on the backup directory.
chmod -R 770 /Volumes/mercury/backups/

-- Change the owner to the backup user.
chown -R torsten /Volumes/mercury/backups/

-- Tell Time Machine to recognize this as a supported destination.
touch /Volumes/mercury/backups/.com.apple.timemachine.supported

So far things are pretty similar to how you’d do this on OpenSolaris, let’s take a look:

zfs list

NAME                 USED  AVAIL  REFER  MOUNTPOINT
mercury              338G  2.36T   172G  /Volumes/mercury
mercury/Music       27.6G  2.36T  27.6G  /Users/torsten/Music
mercury/backups     23.5K   500G  23.5K  /Volumes/mercury/backups  -- our new filesystem!
mercury/tm-quercus   136G   364G   136G  /Volumes/tm-quercus
mercury/tm-smorin   2.86G   247G  2.86G  /Volumes/tm-smorin

Now we need to trick OS X into sharing our ZFS filesystem. Here we create a new directory inside it. This needs to be done because, oddly, Finder will otherwise see the new filesystem as its parent.

mkdir /Volumes/mercury/backups/time-back

Since we are running OS X Leopard (version 10.5), System Preferences won't display many of the directories in the root of the OS. So, to make this new filesystem accessible to the GUI, we need to make a symlink in the user's home directory.

ln -s /Volumes/mercury/backups/time-back ~/time-back

Now we are ready to share! Go to System Preferences > Sharing and add the newly created directory as a share.
Sharing pane in System Preferences

Let's see what Time Machine thinks. First let's mount the share from the Mac laptop (mine runs Lion, 10.7). Open Finder and press Cmd-K to connect to a server. Then type afp://[Server Name or IP]/[Share Name] like so:

Finally, open Time Machine Preferences and select the backup disk

View of Time Machine with the ZFS share visible.

** If the drive does not appear in Time Machine Preferences do this on the client (in this case the laptop):

defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1

If all went well you’re now storing your backups on ZFS to a remote server!

Adding the Spatialite GeoServer Plugin on OS X

Start by going to the GeoServer documentation page and reading through it. It didn't quite work out for me, so I thought I'd be more specific and share the steps I took to make this work.

1) Get the plugin. The URL given in the GeoServer manual is incorrect, as the plugin is not in the trunk of the project. Instead use the following command and/or URL to download the plugin –

curl -O

2) Unzip the archive file –


3) Copy both jar files to the GeoServer lib directory –

cp gt-jdbc-spatialite-2.7-SNAPSHOT.jar /Applications/

cp sqlite-jdbc-spatialite-3.7.2-1.jar /Applications/

4) Install geos and proj (via MacPorts) –
sudo port install geos
sudo port install proj

5) Startup/restart Geoserver.

6) Add a new store and you should see Spatialite as an option.

Getting a List of Installed Features from Server 2008 using PowerShell

This is great for getting a list of installed features from a Windows server. So far I've only tried doing this locally, aside from a quick test on a Windows 7 machine, which failed.

In PowerShell:
Import-Module ServerManager
Get-WindowsFeature | Where-Object { $_.Installed }

Fix Orphaned Users SQL2008 R2

General pattern for this common problem following a database restore. Note that the syntax has changed from SQL2005.

SQL 2008 R2 Syntax:

USE <database_name>;
EXEC sp_change_users_login @Action='Report';
EXEC sp_change_users_login @Action='Update_One',
     @UserNamePattern='<database_user>', @LoginName='<login_name>';

SQL 2005 Syntax:

USE <database_name>;
EXEC sp_change_users_login 'Report';
EXEC sp_change_users_login 'Auto_Fix', 'user';


SSIS DTS_E_PRODUCTLEVELTOLOW Errors on Remote Execution

Here is a post that I hope will help some poor fool save a lot of time.

This is the situation I found myself in:
Sandbox environment:
1) Within Visual Studio, Built and executed the SSIS project pointed at the sandbox database – no errors.
2) Within Visual Studio, Built and executed the SSIS project pointed at the user acceptance testing database – no errors.

User acceptance environment:
1) Deployed package to Integration Services and validated package without execution – no errors.
2) Executed deployed package, received a DTS_E_PRODUCTLEVELTOLOW error on the first component.

After reading forums and many failed attempts to resolve the problem, I found that SSIS attempts to run the package on the computer from which you are using Management Studio. That means that if you execute a package on a remote SQL Server from a machine that doesn't have MSSQL installed, it will fail. Instead, execute the package from a job on the remote SQL Server, install MSSQL on your execution platform, or install Management Studio on the remote SQL Server and execute the package from there.
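For the first option, a SQL Agent job on the remote server that runs the package can be kicked off on demand (the job name is a placeholder):

```
EXEC msdb.dbo.sp_start_job @job_name = N'[SSIS Package Job]';
```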


Automating Database Deployments from Subversion

There was a bit of a learning curve going from manually deployed SQL statements and SSIS packages to automatic deployments, but this is what I did and it seems to work so far.

1. Create a database project in Visual Studio and set up the projects to your liking, making sure it builds and deploys correctly, a chore in itself.

2. Get the SlikSVN command line tools for Windows and install them. The Windows equivalent of $PATH is only updated following a restart of your workstation, after which you can execute the svn commands without including the full path to the executable. In short, restart your workstation!

3. Create a daily build directory to which you can check out from SVN.

4. Write a batch script to check out the project from SVN. Here's an example:
svn export --force https://[subversion url]/project "[build directory]"


5. Write a batch script to build the database project using MSBuild. Note that msbuild.exe is not on the PATH, so the executable must be fully qualified. Here's an example:
C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\msbuild /target:BUILD .\[path]\[project file]


6. Write a batch script to deploy the database project using OSQL. Here’s an example:
OSQL -E -S[server]\[instance] -i .\[path]\[SQL Script].sql


7. If both batch scripts test correctly, merge them into a single batch file and polish it by adding logging and parameterizing paths and server names.
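A sketch of what the merged script might look like; the error handling, log location, and all bracketed values are assumptions:

```
@echo off
REM daily-build.bat -- checkout, build, and deploy in one pass.
REM All bracketed values are placeholders.

set BUILD_DIR=[build directory]
set LOG=%BUILD_DIR%\deploy.log

echo %date% %time% starting deployment > "%LOG%"

svn export --force https://[subversion url]/project "%BUILD_DIR%" >> "%LOG%" 2>&1
if errorlevel 1 goto failed

C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\msbuild /target:BUILD "%BUILD_DIR%\[project file]" >> "%LOG%" 2>&1
if errorlevel 1 goto failed

OSQL -E -S [server]\[instance] -i "%BUILD_DIR%\[SQL Script].sql" >> "%LOG%" 2>&1
if errorlevel 1 goto failed

echo %date% %time% deployment succeeded >> "%LOG%"
goto :eof

:failed
echo %date% %time% deployment FAILED, see log >> "%LOG%"
exit /b 1
```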

8. Run the process manually until you are confident in it, then schedule it to run regularly through Windows Task Scheduler.

Extra credit: consider how much more straightforward this would be on any *nix system.