Finding and adjusting the important cost variables in software development is a lot harder than doing the same in old-school manufacturing. As a consequence, many organizations that deal in software give up and choose to focus almost exclusively on the one variable that's easy: developer pay. That's too bad, because it lets a blinkered view drive decisions that are critical in determining the company's performance.
An exclusive focus on developer pay neglects the big picture in favor of a small corner. Questions about reducing costs become entirely of the "Could I hire cheaper developers?" kind. That rests on the false premise that developers with an equal laundry list of skills are roughly interchangeable (they both know ASP, so I'll take the one that asks for $500 less a month).
A look at the big picture would include more encompassing questions, such as "How do I get my product built with the least amount of resources?". Phrased like that, many other variables become at least as important as developer pay.
Let's consider just one of these variables for now: performance. Numerous studies have shown that a single developer can be up to ten times as effective as another when timed working on a single task. In the extreme (and over-simplified) case, where developer efficiency is the only bottleneck, you could afford to pay a grade-10 developer $50,000 a month instead of hiring ten grade-1 developers at $5,000 a month.
That sounds pretty extreme, and it is. The cost of a development project depends on far more than coding efficiency and design skill. But what if we considered cases with much less than a 1,000% difference in developer skill? Let's stay small and say 100%. That's just 1/10 of the theoretical possibilities.
Developer A is twice as efficient as developer B. Let's keep it simple and say that this difference includes the full range of costs associated with having developers on board -- even the communication and coordination costs (which explode as you add more developers).
A simple cost equation
Consider the cost equation for a simple project. Five type B developers working for $5,000 a month for five months: 5B x $5,000 x 5 months = $125,000.
Suppose you were able to replace all the type B developers with type A. You had to pay twice as much, but they would also get done twice as fast, leading to this cost equation: 5A x $10,000 x 2.5 months = $125,000.
Even this incredibly simplified contrast opens up a world of variations in between. Hiring half as many type A developers would still get us done by the original deadline. And what if we only had to pay 50% more to get type A instead of type B developers?
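The arithmetic above fits in a few lines. This is just a sketch of the example's numbers; the `project_cost` helper and the figures are assumptions taken from the scenarios described here:

```python
def project_cost(developers, monthly_pay, months):
    """Total project cost: head count x monthly pay x duration in months."""
    return developers * monthly_pay * months

# Baseline: five type B developers at $5,000/month for five months.
cost_b = project_cost(5, 5_000, 5)            # $125,000

# Five type A developers cost twice as much but finish twice as fast.
cost_a = project_cost(5, 10_000, 2.5)         # $125,000 -- same total

# Variation: pay only 50% more for type A and keep the doubled speed.
cost_a_cheaper = project_cost(5, 7_500, 2.5)  # $93,750
```

The same helper makes it easy to try other combinations of head count, pay, and speed from the variations mentioned above.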
Of all the development shops I've worked at, every one had star developers that were at least twice (and likely three or four times) as effective as the worst developers. None of them had a pay scale that matched that fact. Most had a difference of at most 25%: the best developer would get 25% more than the worst, despite being 100-300% more valuable.
Instead of making sure that they only hired star developers (by applying more rigorous hiring procedures), they would all argue pay as the primary deciding factor in new hires. A difference of $500 between two candidates could easily decide which got hired. At a monthly pay of $5,000, the more expensive developer would only have to be 10% more effective to be worth it. 10%. That's just 1/100 of the theoretical potential. And that could be a deal breaker.
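The break-even point in that hiring example is just the pay difference divided by the base pay (both figures assumed from the paragraph above):

```python
base_pay = 5_000      # monthly pay of the cheaper candidate
pay_difference = 500  # premium asked by the more expensive one

# The pricier hire breaks even once they are this much more effective.
break_even = pay_difference / base_pay  # 0.10, i.e. 10%
```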
In the face of the incredible differences between developers, managers should think less about the cost of each individual and more about the cost of the total throughput.