Wednesday, December 15, 2010

The “next-to-last” date that we have all been waiting for…

 

We are not quite to the date we’ve all been waiting for.  That would be the RTW and RTM dates, but this one is almost more exciting!!!

Dynamics CRM 2011 is now officially in RC (Release Candidate)!!!

This is a HUGE milestone and very, very exciting.

For the official announcement, and links to all the RC installers, see Microsoft’s Blog.

To sign up for a beta site for the last few weeks, go to the CRM Beta Site.

In just the couple of hours I’ve been able to work with some of the RC bits, I am VERY pleased!  Especially with the performance improvements in the Outlook Client.

Happy Playing.

Robert
- One is pleased to be of service

 

Monday, November 29, 2010

Dynamics CRM Advanced Performance Optimizations

 

Introduction

Over the years, I have had the honor and opportunity of working on some very large user-count and very high-volume Dynamics CRM projects.

Inevitably, these projects encounter performance problems as they move to production due to the increased load placed on the systems. Sometimes I’m around before the project goes into production and we can proactively plan for and implement the needed optimizations. Many times, I’m called in after the system has already crashed several times in production.  This is not a good place to end up!

Below are suggested optimizations a large-seat and/or high-volume Dynamics CRM deployment should consider implementing.

CRM 2011

Now that Dynamics CRM 2011 OnPremise has been released, I have had several inquiries as to the applicability of the optimization guidance currently suggested by Microsoft and supplemented here. I am excited to write that ALL of the existing optimization guidance is still wholly applicable!  In addition, Microsoft has released the first benchmark on CRM 2011 demonstrating 150,000 concurrent users.  I have updated the chart below with the new benchmarks.

Step 1: Understand the Baseline Benchmarks

First, let’s understand what’s possible for a Dynamics CRM environment.  There are currently several benchmarks that have been performed with Dynamics CRM to flesh out the recommended hardware and also demonstrate the significant scalability of the product.  I strongly suggest at least a moderate review of each of the following documents, and a thorough review of those that are closest to your environment and needs.

Microsoft Dynamics CRM 4.0 Suggested Hardware for Deployments of up to 500 Concurrent Users

Describes general hardware sizing information that will support Microsoft Dynamics CRM version 4.0 with up to 500 concurrent users in a single deployment on-premise model.

500 Concurrent Users; 42,223 web requests / hour; 6,427 business transactions / hour

Microsoft Dynamics CRM 4.0 Performance and Scalability

 

This download includes the following four papers:

1.     Microsoft Dynamics CRM_4_0_Enterprise Performance and Scalability.pdf

 

2.     Microsoft Dynamics CRM_4_0_Performance and Scalability _Users.pdf

 

3.     Microsoft Dynamics CRM4_0_Performance and Scalability_Database.pdf

 

4.     Microsoft Dynamics CRM4_0_Performance and Scalability_Network.pdf

 

Microsoft, together with Unisys Corporation, completed benchmark testing of Microsoft Dynamics CRM 4.0 running on Microsoft® Windows Server® 2008 operating system and Microsoft SQL Server® 2008 database software. Benchmark results demonstrate that Microsoft Dynamics CRM can scale to meet the needs of an enterprise.

Microsoft Dynamics CRM_4_0_Enterprise Performance and Scalability.pdf

Overview and links to more resources

 

Microsoft Dynamics CRM_4_0_Performance and Scalability _Users.pdf

24,000 users; 1,051,921 web requests / hour; 169,344 business transactions / hour

 

Microsoft Dynamics CRM4_0_Performance and Scalability_Database.pdf

1500 users; 1.03 Billion records; 1.3 TB database

 

Microsoft Dynamics CRM4_0_Performance and Scalability_Network.pdf

Extensive network statistics

Microsoft Dynamics CRM Performance and Scalability with Intel Processor and Solid-State Drive Technologies

Microsoft, working with Intel Corporation, completed benchmark testing of Microsoft Dynamics CRM 4.0 running on Intel server and solid state drive (SSD) hardware. This white paper focuses on benchmark results associated with user scalability.

50,000 concurrent users; 2.4M web requests / hour; 374,400 business transactions / hour; 0.12 second average response time

Microsoft Dynamics CRM 4.0 - xRM Application Scalability Study

This paper describes the details and results of a benchmark testing effort around the multi-tenancy and xRM capabilities of Microsoft Dynamics CRM 4.0, the virtualization features of Microsoft Windows Server 2008 R2 Hyper-V, and the enterprise capabilities of Microsoft SQL Server 2008 R2 running on IBM System xSeries hardware with quad-core Intel Xeon processors and Intel SSDs.

20 LOB xRM applications; 1,000 users EACH LOB Application; 149,760 business transactions / hour; 0.10 second average response time; only 37.1% SQL Server utilization.

Microsoft Dynamics CRM Performance and Scalability in a Virtual Environment with Hyper-V

 

 

 

Microsoft, working with Intel® Corporation and Dell™ Inc., completed workload testing of virtualized Microsoft Dynamics CRM 4.0 on Dell™ PowerEdge™ servers equipped with Intel® Xeon® 7500 series processors and solid state drives (SSDs).

100,000 concurrent users; 5.1M web requests / hour; 778,000 business transactions / hour; 0.29 second average response time

Microsoft Dynamics CRM Performance and Scalability on Intel Xeon Processor-based Dell Servers with Solid-State Drives

Microsoft, working with Intel® Corporation, completed benchmark testing of Microsoft Dynamics CRM 2011 running on Intel® Xeon® 7500 series processor-based Dell R910 servers with Pliant Technology solid state drives (SSDs).

150,000 concurrent users; 5.5M web requests / hour; 703,080 business transactions / hour; 0.4 second average response time

Step 2: Apply the Minimal Recommended Optimizations

Microsoft has published a white paper that covers the minimal set of optimizations that should be applied at every layer of the system, from the Client, to the Application and Platform servers, and on to the database itself.  These should be considered mandatory guidance and implemented in every project regardless of size. Note that these are the minimum, standard optimizations applied for all of the official benchmarks referenced above. Additionally, Microsoft recently released client optimization guidance for CRM 2011 with a focus on CRM Online.  Both of these documents should be considered mandatory reading and guidance.

Optimizing and Maintaining Microsoft Dynamics CRM 4.0

This white paper details techniques, considerations, and best practices for optimizing and maintaining the performance of Microsoft Dynamics CRM 4.0 implementations.

Optimizing and Maintaining Client Performance for Microsoft Dynamics CRM 2011 and CRM Online

The white paper provides readers with the information necessary to ensure and maintain the optimal performance of the clients connecting to a business solution based on Microsoft Dynamics CRM 2011 or Microsoft Dynamics CRM Online.

Optimizing and Maintaining the Performance of a Microsoft Dynamics CRM 2011 Server Infrastructure

This white paper provides information designed to help readers achieve and maintain optimal performance of the server infrastructure supporting a Microsoft Dynamics CRM 2011-based business solution deployed in an on-premises or hosted environment.

Step 3: Apply all the latest Update Rollups and applicable Hotfixes

For a complete matrix of all of the Update Rollups, their release dates, and version numbers, view this blog:

http://rgsiiiya.blogspot.com/2010/09/dynamics-crm-version-matrix.html

You will want to be on the latest Rollup that fits within your build life-cycle when you go to production.

In addition to the Rollups, you will want to apply any of the manual patches and hotfixes that are applicable to your environment.  A couple must-do manual patches are listed below:

1.       AsyncOperation Cleanup
         http://support.microsoft.com/kb/968520

2.       AsyncOperation Auto-cleanup
         http://support.microsoft.com/kb/974896

3.       AsyncOperation missing index
         http://support.microsoft.com/kb/948843

A note of caution regarding AsyncOperation cleanup: if you apply the registry key fixes (Number 2 above), ALL of your workflow history will be deleted immediately when a workflow completes.  This prevents you from having any historical view into your workflows.  I actually suggest that you do NOT apply the registry fixes, but instead use the SQL statement in the first article (Number 1 above) and add a WHERE condition to only delete records older than n days.  I generally recommend keeping somewhere between 7 and 14 days of history.

Note: If your AsyncOperation table has never been cleaned up before, you may experience excessive locking during the cleanup process, so you will want to schedule your initial cleanup during a maintenance window.  If you do experience excessive locks, you may need to run the database in single-user mode and/or stop the Async Service for the duration of the initial cleanup.

 

Additionally, once you schedule a regular cleanup, you will want to monitor for excessive locking and take action as appropriate for your environment.

Step 4: Advanced Optimizations

Your CRM environment and performance will improve significantly after applying all of the suggested optimizations discussed in the white paper Optimizing and Maintaining Microsoft Dynamics CRM 4.0 and the manual changes recommended to optimize the AsyncOperations in Step 3 above.

In addition to the Step 2 and Step 3 changes mentioned above, there are some additional advanced optimization steps required for deployments with a very large quantity of users and/or a very high volume of data transactions. A typical example of this type of system is Microsoft Dynamics CRM deployed within a call center. Therefore, it is strongly recommended that the optimizations below be applied for call centers and similarly high-volume deployments.

At the time of writing this paper, the following information has not yet been formalized into a white paper by the Microsoft team. However, almost all of what follows has been fleshed out on several deployments while working very closely with the Microsoft Dynamics CRM support team and encompasses all of their guidance and suggestions from those engagements.

SQL Server Isolation Level: RCSI

Note that the Optimizing and Maintaining Microsoft Dynamics CRM 4.0 white paper also touches on this topic, but it is frequently overlooked or ignored.

Symptom- One or more of the following situations are occurring:

1.       The installation is experiencing excessive transaction locks that are resulting in an extensive backlog of both write and read locks in SQL Server.

2.       The system frequently comes to a complete stop in the mornings when all your users show up for work and log into Dynamics CRM.

3.       You would like to be smart and proactively prevent yourself from getting into these situations.

Solution: Switch the isolation level of SQL Server to RCSI.

RCSI stands for Read Committed Snapshot Isolation.  RCSI changes two SQL Server behaviors:

1.       Reads only see already-committed data.

2.       SQL Server keeps a snapshot (row versions) of the data in the TempDB and uses that snapshot for all reads.

 

Switching to RCSI will eliminate ALL of the read locks in the system; in addition, it will eliminate the backlog of write locks that queue up behind those reads.

The configuration change is very simple to make.  Documentation for setting Isolation levels is located here: http://msdn.microsoft.com/en-us/library/ms173763.aspx
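For reference, the change itself is a single statement per organization database. A minimal sketch, assuming a hypothetical organization database named Contoso_MSCRM (substitute your own _MSCRM database name):

```sql
-- Requires exclusive access to the database; run during a maintenance window.
ALTER DATABASE [Contoso_MSCRM] SET READ_COMMITTED_SNAPSHOT ON;

-- Verify the setting took effect (1 = on):
SELECT is_read_committed_snapshot_on FROM sys.databases WHERE name = 'Contoso_MSCRM';
```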

A couple caveats:

RCSI makes HEAVY use of the TempDB.  Therefore, you will want to make the following optimizations on the TempDB.

·         Ensure that the TempDB is on its own spindle or LUN.

·         Initially size the TempDB files very large. A good starting point is the largest observed size from an existing production deployment, or at least 25% of the production data size.

·         If you allow for the auto-growth of TempDB, use a fairly large percentage, for example, 20%, otherwise, monitor and manually grow the TempDB when it needs more space.

Make sure that you are running at least Update Rollup 12!  There is a known bug where some Async jobs and bulk emails will execute multiple times when running under RCSI.  The fix was released both as a standalone hotfix and as part of Update Rollup 12.

Multiple email with RCSI

http://support.microsoft.com/kb/2249156

Update Rollup 12

http://support.microsoft.com/kb/2028381

 

Implementing RCSI is very simple to do and will have a TREMENDOUS positive impact on your deployment.  As mentioned above, make sure you have properly set up the TempDB and are running at least Update Rollup 12.  You will be VERY PLEASED with the performance gains from these changes!

Manual data and log file size growth

By default, SQL Server sets up the data and log files to auto-grow by a certain percentage. Frequently, I observe DBAs increasing the percentage to help minimize the impact of the growth cycle.

However, there is one further step you should consider in order to completely manage that growth cycle.

It is suggested that you proactively monitor the data and log files’ “used space” with SCOM or another monitoring tool and send out an alert once the used space reaches a stated ceiling.  I usually suggest somewhere between 80% and 90%.

Subsequently, during your normal maintenance window, when usage of the system is minimal, manually (or scripted), increase the size of the data and log files an appropriate amount for your deployment.

Performing the changes mentioned above will add significant advantage to the deployment because it:

·         Ensures that a file growth cycle happens during low-usage windows and not at the peak of the day, when it could cause serious performance impacts.

·         Allows you to proactively monitor your file storage usage and plan for growth, rather than being stuck in reactive mode when you suddenly run out of disk space.

Fill Factor

On several occasions, I have seen a SQL Server deployment with all of the index fill factors set to zero (0).  Zero is BAD. Zero causes very un-optimized index page allocation and usage.

The recommended setting from the Microsoft Dynamics CRM product support team is that the Fill Factor should be around 80%.  This can, of course, be tuned to your exact deployment.  However, 80% is a great place to start.  You can always change it later.

Note - You will need to re-build all of the existing indexes before they will pick up the new setting.  You will want to schedule this appropriately in a proper maintenance window to minimize the impact to the users.
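As a sketch of what that rebuild looks like, using a hypothetical table name (the same statement applies to any of the Base/ExtensionBase tables):

```sql
-- Rebuild every index on one table with an 80% fill factor.
ALTER INDEX ALL ON [dbo].[AccountBase] REBUILD WITH (FILLFACTOR = 80);
```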

Parallelism

Everybody loves the concept of parallelism.  After all, isn’t parallelism in a multi-CPU, multi-process, multi-threaded environment always a good thing?!

Well, actually, not always.

The Optimizing and Maintaining Microsoft Dynamics CRM 4.0 white paper touches on this topic but does not provide explicit guidance. However, that section of the white paper should still be consulted.

In practice, once all of the previously mentioned optimizations, especially RCSI, are completed, you are very unlikely to experience any additional CxPacket waits, even with a high degree of parallelism. 

However, IF you still are experiencing a large quantity of CxPacket waits in SQL Server for the Dynamics CRM database, you may be having a problem with parallelism.  If you are, it is suggested that you set parallelism to “1”.

Details on how to do this are located here: http://msdn2.microsoft.com/en-us/library/ms181007.aspx

As always, you can experiment with other values greater than one until you find the best balance for your specific environment.
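For reference, setting the server-wide degree of parallelism to 1 looks like the following sketch (see the documentation linked above for details):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;
```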

SQL Server 2008 Specific Optimizations

SQL Server 2008 introduces a whole suite of new features that are directly relevant to a Dynamics CRM deployment.  Things like:

·         Row Compression

·         Page Compression

·         Filtered Indexes

·         Sparse Columns

·         Encryption

Subsequent to the release of SQL Server 2008, the Microsoft Dynamics CRM team released guidance on how to best leverage these features for Dynamics CRM.

Improving Microsoft Dynamics CRM performance and Securing Data with Microsoft SQL Server 2008

http://www.microsoft.com/downloads/en/details.aspx?FamilyID=b5bb47a4-5ece-4a2a-a9b5-5435264f627d

 

 

One of the most interesting new features that can very significantly improve Dynamics CRM is Filtered Indexes. 

Dynamics CRM uses two tables for every entity:

<entity>Base and <entity>ExtensionBase

And subsequently provides the full set of logical and filtered views that join these tables (and all the other supporting tables) together.

In almost all cases, the logical and filtered views over these tables restrict their results with a WHERE clause of:  statecode = 0

As noted in the referenced paper, supporting that filter with ordinary, unfiltered indexes creates significant overhead in the indexes and their maintenance, and can often result in the indexes being completely bypassed in some situations.

Therefore, it is strongly encouraged that you, at the bare minimum, look at adding filtering to existing indexes and/or adding additional filtered indexes to your deployment.
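As an illustration (the index and column choices here are hypothetical; analyze your own workload first), a SQL Server 2008 filtered index limited to active records looks like this:

```sql
-- Only active (statecode = 0) rows are included in the index.
CREATE NONCLUSTERED INDEX IX_AccountBase_Name_Active
ON [dbo].[AccountBase] ([Name])
WHERE [StateCode] = 0;
```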

Refer to the referenced paper for other great optimization opportunities when using SQL Server 2008!

SAN, LUNs, and Spindles

A common scenario has been encountered when working with Dynamics CRM databases on SANs.  As the above guidance has frequently mentioned, it is best practice to ensure that database files, log files, and the TempDB all be on separate LUNs and Spindles.

What is frequently encountered is that, while the SAN administrator is providing separate LUNs, all of the LUNs are on the exact same drive spindle.  This results in an excessive bottleneck at the physical drive level.

When carving out LUNs on a SAN, you will want to make sure that they also spread out over separate physical spindles.

Network Topology

Use smart subnets

The use of well-thought-out subnets and network segregation can very significantly manage the volume of traffic flowing through the network, and keep stray traffic out of areas where it doesn’t belong and would otherwise just consume bandwidth.

One of the emergency calls that I received was from a customer that was using one single subnet for their entire enterprise.  During peak hours, they were experiencing highly degraded network I/O between the application servers and the database.  All of the users’ Facebook browsing and chat sessions were passing through the same wire that sat between the application servers and the database.  After properly creating smart subnets and routing rules, the entire problem disappeared!

It is strongly suggested that the datacenter have its own subnet that is segregated from the primary user population.

Use smart switching

On another emergency “My Dynamics CRM is Dead!” call, it was discovered that there were a couple of problems with the network:

1)      The datacenter was primarily using broadcast routers and not switches

2)      The few routing rules that existed were not being managed and were routing intra-datacenter traffic half-way around the world and back, just to get to the server in the next rack over, 6 feet away.

It is strongly suggested that you use switches over broadcast routers in the data center.  And, TRIPLE CHECK your routing rules.  Make sure your local data center traffic does not have to take a world tour just to go next door.

Step 5: Ongoing Monitoring and Maintenance

Like ANY database-centric application, Dynamics CRM sits on top of a growing, changing, living database. Therefore, it needs ongoing monitoring and maintenance just like any other enterprise database.

Scale Group Jobs

It is suggested that you set the bulk delete and re-index jobs to a time that is best for your enterprise. By default, the time these take place is the time that CRM was installed, which very likely is not when you want these to happen. You will want to run the Scale Group Editor to adjust the schedule for these to a low-usage time applicable for your environment. 

Microsoft provides a utility for editing these schedules as referenced below.

ScaleGroupJobEditor

Allows for the editing of the schedule for bulk deletes and re-indexing.

Scheduled Maintenance Plans

SQL Server provides a native maintenance facility called Maintenance Plans, where you can set up regular maintenance jobs to ensure that the database stays happy and healthy.  It is strongly suggested that you set up an applicable set of maintenance plans!  Some of the items that should be run on a regular basis are:

·         Re-indexing

·         Statistics Update/Generation

Additionally, you should be checking the query/index usage stored procedures, looking for new areas where you may not have proper indexes in place.  You will then want to adjust your existing indexes and add any applicable new ones to ensure that the system is operating at full potential.
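As a sketch, the two regular maintenance items above boil down to statements like the following (run per table with a hypothetical table name here, or via the equivalent Maintenance Plan tasks):

```sql
-- Rebuild the indexes on a table (example table name):
ALTER INDEX ALL ON [dbo].[AccountBase] REBUILD;

-- Refresh out-of-date statistics database-wide:
EXEC sp_updatestats;
```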

Quick Find Attributes

As you know, Dynamics CRM allows you to specify a set of attributes that are searched when a user uses the Quick Find feature on each entity.  It is imperative that you select ONLY those fields that are actually needed.  Additionally, you will want to ensure that you have a custom index defined for each Quick Find set of fields!  Failing to do so will usually result in table scans.

Note: When setting up these indexes, it is strongly suggested that you make them “covering” indexes where all of the attributes that are returned in the Quick Find View are included as covering fields in the index.  This will result in extremely fast quick finds in Dynamics CRM!
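As a hypothetical illustration (the searched and displayed columns depend entirely on your Quick Find configuration), a covering index for an Account Quick Find that searches Name and AccountNumber and returns a couple of other columns might look like:

```sql
CREATE NONCLUSTERED INDEX IX_AccountBase_QuickFind
ON [dbo].[AccountBase] ([Name], [AccountNumber])
INCLUDE ([PrimaryContactId], [Address1_City]);  -- columns returned by the Quick Find view
```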

 

Special Situations

Global Deployments

Issues with latency

Most projects are initially concerned with bandwidth.  For most of these projects, this is actually of minimal concern because they already have very healthy and wide Internet and network pipes in place.

However, the one item that Dynamics CRM is actually VERY sensitive to, and usually completely overlooked, is latency!

Users who are on low bandwidth and low latency connections will usually have very acceptable performance.

The users with high bandwidth and high latencies will usually have very bad performance.  This is especially true when using SSL.  I have seen situations on high latency connections where just the SSL handshake takes almost 2 seconds!

If you have users in remote locations that have very high latency connections, you will want to look into one or more of the following items:

·         Use Dynamics CRM’s “IFD” deployment, even if it is not actually Internet facing.  An IFD deployment with its forms authentication will reduce the total quantity of round-trips between the browser and the server significantly.

·         Use of WAN Accelerators. These usually provide combinations of compression and caching that can often bring significant performance gains.

·         Use of Citrix or Terminal Services.  In some situations where you are at the complete mercy of poor network connections with poor bandwidth and poor latency, sometimes the best approach is the use of server-side application session hosting like Citrix and Terminal Services. This will reduce the network traffic down to just the keyboard I/O and screen image refreshes.

Conclusion

The purpose of this document has been to provide you with a starting point to assist you in optimizing your Microsoft Dynamics CRM implementation. Please note that this document is not all-inclusive.  In addition, posted within all of the MS CRM Rollups are several manual optimizations that can be applied if your particular implementation is demonstrating the problematic behavior described within that rollup’s documentation.  It is therefore necessary to read through each entire rollup document.  There are often gems hidden beneath the small print.

 

Robert
- One is pleased to be of service

 

Friday, November 26, 2010

Move Point, Move! - Geospatial mapping and how to move a point x miles

 

Bounding Rectangles

I was recently asked to assist in a project that was doing some Geo-mapping with Bing in an extension application to Dynamics CRM.

But first, the disclaimer… I am neither a geometer nor a cartographer, so please be kind if I get some of the minutiae a little off.

The basic functionality that they were trying to accomplish goes something like this:

The user will select a pre-built Advanced Find query for selecting some Accounts. The results show in a standard table grid display.

As the user selects an Account and adds it to their Route, a couple things happen on the Bing map:

a) All of the Accounts selected so far for the Route each have their pushpin displayed.

b) All other Accounts owned by the user that are within a “proximity” of those Accounts should also show on the map, with a different style of pushpin to differentiate them. Let’s call these “Candidate Accounts”.

The mechanism that they are using for “proximity” is a rectangular boundary that is 5 miles larger (in all directions) than a rectangle that would be naturally made by the furthest Top, Left, Bottom, Right of the Accounts in the current Route.

Let’s look at a picture that represents the (a) bullet above: Four selected Accounts:

[Image: map showing the four selected Accounts in the Route as purple circles]

Here we have 4 Accounts in our Route (Purple Circles). Notice how a rectangle can be represented by the four points that represent the:

  • Furthest Top (A)
  • Furthest Left (C)
  • Furthest Bottom (D)
  • Furthest Right (A)

Also note that the Account in the top-right corner (A) happens to be both the Furthest Top and the Furthest Right, while the Furthest Left (C) and Furthest Bottom (D) are two different Points.

Now, before we can perform step (b) to show our Candidate Accounts, we have to expand our rectangular area by 5 miles to define our “proximity”.  We want to pick up the Accounts of Interest that are within 5 miles outside of our current purple rectangle.

First, for a quick review of longitude and latitude, you may want to check out this Wikipedia article. Since most digital mapping systems can use decimal degrees, and most people work just fine with decimal numbers, we will do everything in decimal degrees.  But the one thing I see people stumble over most is remembering which measurement is for which direction. So, here’s a simple rule:

  • Latitude is the measurement that runs North and South on the Globe. (Think ‘rhymes with altitude, as in “up and down” as in Vertical.)
  • Longitude is the measurement that runs East and West on the Globe. (Think aLONG the Equator as in Horizontal.)

And, for our geometry review, recall that we only need 2 points on the plane to draw a rectangle.  By convention, this is usually the Top-Left corner and the Bottom-Right corner.  So we will use those two points.

Here are the point coordinates for our four Accounts as mapped above along with the minimums and maximums of those coordinates.

Point   Latitude        Longitude
A       39.75492        -104.84648
B       39.74270        -104.94437
C       39.71187        -104.97232
D       39.69964        -104.86983

Min     39.69964 (D)    -104.97232 (C)
Max     39.75492 (A)    -104.84648 (A)

To calculate our two points, we need to find the minimum and maximum of our data.  The formula for determining our two points is as follows:

  • Top-Left = ( Max(Latitude), Min(Longitude) )
  • Bottom-Right = ( Min(Latitude), Max(Longitude) )

And when we substitute our actual numbers from our table, we end up with:

  • Top-Left = (39.75492, -104.97232)
  • Bottom-Right = (39.69964, -104.84648)
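A minimal Python sketch of the corner derivation, using the four Accounts from the table above:

```python
# The four Accounts from the table, as (latitude, longitude) pairs.
points = {
    "A": (39.75492, -104.84648),
    "B": (39.74270, -104.94437),
    "C": (39.71187, -104.97232),
    "D": (39.69964, -104.86983),
}

latitudes = [lat for lat, lon in points.values()]
longitudes = [lon for lat, lon in points.values()]

# Top-Left = (max latitude, min longitude); Bottom-Right = (min latitude, max longitude).
top_left = (max(latitudes), min(longitudes))
bottom_right = (min(latitudes), max(longitudes))

print(top_left)      # (39.75492, -104.97232)
print(bottom_right)  # (39.69964, -104.84648)
```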

Let’s plot these points to make sure we have the math correct:

[Image: map with the Top-Left and Bottom-Right corner points plotted as red pushpins]

Now we have the two points that define our original rectangle around our original four Accounts as shown by the red pushpins.

So-far, so-good.  That wasn’t too hard.

But, now we need to move our two points 5 miles in EACH of the four directions: North, South, East, West

Up to now, we have had a mapping lesson and a geometry lesson; now it’s time for a trigonometry lesson.

Let’s zoom into our Bottom-Right corner:

[Image: zoomed view of the Bottom-Right corner, showing sides a, b, and the purple diagonal c]

We need to move the Bottom-Right corner both East (side a) and South (side b) by 5 miles. (Note that the picture above is at a smaller scale of only 0.5 miles.) This means that we need to move the Bottom-Right point along the purple line (side c).

Distance Between Two Known Points

Does anyone remember the formula? Anyone remember the Pythagorean theorem?  Here we go:

c = sqrt( a * a + b * b )

And substituting our numbers we get:

c = sqrt( 5 * 5 + 5 * 5 ) = sqrt( 50 ) ≈ 7.07 miles

Well, that sure seems easy enough. The problem lies in the basic question of: how do we convert the unit of miles into longitude-latitude?

This gets even more complicated by the fact that, contrary to popular belief, the earth is not flat!  Realizing that this is still a controversial topic for some, we live on a spherical ball!  And worse, it’s not even a perfect sphere.  It’s a little uneven in spots.  The smart people (of which I’m not one) call it an oblate spheroid.

Basically, we are talking about calculating the distance between points on the planet.  There is an excellent, understandable presentation of the math involved over at Meridian World Data. They present three approaches to the problem: Rough Approximation, Improved Approximation, and Great Circle Distance.

For our requestor’s situation, it was decided to use the Improved Approximation formula:

Improved approximate distance in miles:

        c = sqrt( a * a + b * b )

where a = 69.1 * (lat2 - lat1)
and b = 69.1 * (lon2 - lon1) * cos(lat1/57.3)
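Here is that formula as a small Python function, keeping the source's 69.1 and 57.3 constants; the distance check between Accounts A and C below is just an illustration:

```python
import math

def approx_distance_miles(lat1, lon1, lat2, lon2):
    """Improved approximate distance in miles (Meridian World Data formula)."""
    a = 69.1 * (lat2 - lat1)
    b = 69.1 * (lon2 - lon1) * math.cos(lat1 / 57.3)
    return math.sqrt(a * a + b * b)

# Distance between Accounts A and C from the bounding-rectangle example:
d = approx_distance_miles(39.75492, -104.84648, 39.71187, -104.97232)  # ≈ 7.3 miles
```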

Unfortunately, the problem we have with this formula is that it computes the distance between two known points.  In our situation, we have only ONE known point, and we already know the distance (5 miles).

Known Point + Miles = New Point

There is a fantastic and succinct blog explaining how to perform this conversion over at The Endeavour by John D. Cook.

As we mentioned above, let’s assume the earth is a perfect sphere and that at the equator the radius (R) is:

R = 3960.0 miles

And because it is a sphere, a “line” is actually an arc along the circumference of the sphere.

Also of import is the fact that all of the math and angles on a sphere are in radians. But longitude and latitude are in decimal degrees.  Thus, we need to know how to convert between them:

The formulae for converting between Degrees and Radians are:

Degrees = Radians * (180 / PI)

Radians = Degrees * (PI / 180)

Using these formulae, we can define our conversion constants as follows:

RadiansToDegrees = (180 / PI)

DegreesToRadians = (PI / 180)

Moving North and South

The formula for the change in distance North and South along a Latitude is:

DistanceNorthSouth = (MilesToMove / RadiusAtEquator) * RadiansToDegrees

Note that (MilesToMove / RadiusAtEquator) will give us the arc distance in Radians. But, we will need it in Degrees since Longitude and Latitude are in Decimal Degrees. Which is why we have to multiply by RadiansToDegrees to get us to Degrees.

Moving East and West

The formula for the change in distance East and West along a Longitude is more complicated. As we discussed, this is because as we get closer to either of the earth’s poles, the distance that one degree of longitude represents gets smaller and smaller.  In fact, the circumference of a circle parallel to the equator at latitude x is cos(x) times the circumference of the equator.

To make this easier, let’s do this in two parts.

First, we need to find the radius (r) of the earth at the latitude that we need to move East or West on.

RadiusAtLatitude = RadiusAtEquator * cos( LatitudeDecimalDegrees * DegreesToRadians )

Now, we can use the same basic formula we used for North and South, but now use the radius (r) for where we actually are on the earth:

DistanceEastWest = (MilesToMove / RadiusAtLatitude) * RadiansToDegrees
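The two formulas translate directly into code; a minimal Python sketch (function names are mine, chosen to match the formula names above):

```python
import math

EARTH_RADIUS_MILES = 3960.0  # equatorial radius used throughout this post
RAD_TO_DEG = 180.0 / math.pi
DEG_TO_RAD = math.pi / 180.0

def distance_north_south(miles_to_move):
    """Degrees of latitude covered by moving `miles_to_move` north or south."""
    return (miles_to_move / EARTH_RADIUS_MILES) * RAD_TO_DEG

def distance_east_west(miles_to_move, latitude_degrees):
    """Degrees of longitude covered by moving `miles_to_move` east or west
    at the given latitude (the parallel shrinks by cos(latitude))."""
    radius_at_latitude = EARTH_RADIUS_MILES * math.cos(latitude_degrees * DEG_TO_RAD)
    return (miles_to_move / radius_at_latitude) * RAD_TO_DEG
```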

Move Point, Move!

We are almost at the finish line!

Now that we know how to convert miles to decimal degrees, we can calculate our new points. Let’s combine all our steps into a single formula for each corner to simplify the implementation in your own language.

NewTopLatitude = StartingLatitude + (MilesToMove / 3960) * (180/PI)

NewLeftLongitude = StartingLongitude - (MilesToMove / (3960 * COS(StartingLatitude * (PI/180)))) * (180/PI)

NewBottomLatitude = StartingLatitude - (MilesToMove / 3960) * (180/PI)

NewRightLongitude = StartingLongitude + (MilesToMove / (3960 * COS(StartingLatitude * (PI/180)))) * (180/PI)

(where StartingLatitude is the latitude of the corner being moved)

Ok, let’s do the math…

Recall that our starting rectangle’s two points are:

  • Top-Left = (39.75492, -104.97232)
  • Bottom-Right = (39.69964, -104.84648)

And when we plug in those points and use 5 miles for our MilesToMove, we end up with these new points:

  • New-Top-Left = (39.82726, -105.06642)
  • New-Bottom-Right = (39.62730, -104.75246)

Which, when we plot on our map, and add some measurement lines to validate our results, we get exactly what we are looking for: A new rectangle that is exactly 5 miles farther out, in all directions, from our original rectangle.
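Putting it all together, a short Python sketch reproduces the expansion; note that each corner's own latitude is used for its east-west adjustment:

```python
import math

RAD_TO_DEG = 180.0 / math.pi
DEG_TO_RAD = math.pi / 180.0
EARTH_RADIUS_MILES = 3960.0
MILES_TO_MOVE = 5.0

top_left = (39.75492, -104.97232)      # (latitude, longitude)
bottom_right = (39.69964, -104.84648)

# Degrees of latitude per 5 miles north/south (same everywhere).
lat_delta = (MILES_TO_MOVE / EARTH_RADIUS_MILES) * RAD_TO_DEG

def lon_delta(latitude_degrees):
    # Degrees of longitude per 5 miles east/west, at the latitude of the corner being moved.
    radius_at_latitude = EARTH_RADIUS_MILES * math.cos(latitude_degrees * DEG_TO_RAD)
    return (MILES_TO_MOVE / radius_at_latitude) * RAD_TO_DEG

new_top_left = (top_left[0] + lat_delta,
                top_left[1] - lon_delta(top_left[0]))
new_bottom_right = (bottom_right[0] - lat_delta,
                    bottom_right[1] + lon_delta(bottom_right[0]))

print(new_top_left)       # ≈ (39.82726, -105.06642)
print(new_bottom_right)   # ≈ (39.62730, -104.75246)
```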

[Image: map showing the new, expanded rectangle 5 miles outside the original rectangle, with measurement lines]

Summary

WOW, that was a blast.  We got a review of several math disciplines: geometry, algebra, and trigonometry. We got to play with cool pictures of maps and some pretty colors.  What more could a person ask for?

We learned a little bit about how Longitude and Latitude work, and the complications of shrinking distances on Longitude as you move towards either of the earth’s poles.

And ultimately, we were able to answer the question of how to move a (Longitude,Latitude) point on a map a specified distance in miles (5 miles in our case).

I hope that you have had as much fun learning this as I did in my original research!

 

Robert
- One is pleased to be of service