We had a project in an old but perfectly serviceable framework. The application was working fine, we just wanted some basic regular maintenance, and corporate decided we should outsource it as we didn't have a lot of time between us.
We gave the external team explicit instructions to continue using the existing framework, since they'd asked to rewrite it in a newer one: just add simple features and maintain what's there.
The project comes back very troubled, barely working, and just feels janky. Things that have worked fine for a decade are broken. None of us look at the code (not having to was the whole point), but instead we just keep sending back revisions. Every time they fix something, something else breaks.
Well, after multiple rounds of back and forth failing to get a very basic form working correctly, we decide to dig in and fix it ourselves. We discover that instead of using the existing framework, they'd rewritten everything in their framework of choice and written a giant, janky adapter layer to translate it to satisfy the existing framework's interfaces. It completely undermined the point of keeping the existing framework, which was to keep changes to a minimum. We wanted maintenance, not a rewrite.
The whole codebase was a confused mess no person in their right mind would want to maintain. I have never been so frustrated in my life. After we confronted them about it, and they adamantly defended it, we ended up firing them.
This is a common example of not fixing the root cause and instead trying to fix things via "outside validation", which has bad side effects.
The correct way of fixing SQL injection is to use prepared statements and parameters.
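For example, a minimal sketch with PHP's PDO (the table and column names here are made up for illustration):

    // The SQL template and the value travel separately: $email is
    // bound as a parameter, never concatenated into the query string.
    $pdo  = new PDO('mysql:host=localhost;dbname=app;charset=utf8mb4', $user, $pass);
    $stmt = $pdo->prepare('SELECT id, name FROM users WHERE email = ?');
    $stmt->execute([$email]);
    $row  = $stmt->fetch();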
Other examples: Windows allows software to do bad things because it has no proper permission control (to maintain compatibility). Antimalware tools scan applications by matching patterns of known virus code, but this has many false positives and false negatives. That causes a lot of trouble (killing innocent software, scanning costing performance, etc.) because it doesn't fix the root cause (lack of proper permission management).
Can you say more about proper permission management?
If we are talking about ransomware running in a user context, it'd have the user's permissions and could encrypt anything the user has access to.
If we are talking about extreme sandboxing, you make it hard for programs to work together, and you get permission fatigue: the user either has no idea what they are allowing or gets used to allowing every permission.
Somehow, escaping is beyond the comprehension of many people, yet I find it a simple and straightforward concept.
Escaping isn't always straightforward. Or rather, it is in simple languages or in languages that are designed to make it straightforward, like HTML, but in SQL it's surprisingly tricky, and subtle bugs in escaping routines are an occasional source of vulnerabilities. E.g., https://stackoverflow.com/a/12118602. This is why modern best security practice is to use parameterized statements instead.
There are so many footguns; just don't do it.
PHP users tried addslashes(), realized there are cases it can't handle, made a SQL variant in mysql_escape_string(), then realized that's open to abuse since you can mess with the character set. Then came mysql_real_escape_string() and later mysqli_real_escape_string(), and even those have flaws depending on the DB charset.
So if you find the concept easy, I'd wager it's because you don't handle some exploit path.
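To sketch the character-set abuse (this is the well-known GBK demonstration, not something exotic):

    // addslashes() works byte by byte. Given the bytes 0xBF 0x27, it
    // sees the 0x27 (') and prepends a backslash (0x5C):
    $input   = "\xbf\x27 OR 1=1 -- ";
    $escaped = addslashes($input);   // now 0xBF 0x5C 0x27 ...
    // But if the connection charset is GBK, 0xBF 0x5C is one valid
    // multibyte character, so the server consumes those two bytes as
    // one character and the 0x27 survives as a bare, unescaped quote.
    // mysql_real_escape_string() only avoids this when the connection
    // charset is set properly (via mysql_set_charset, not SET NAMES).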
Doing your own escaping is digital whack-a-mole. Let the experts who wrote the prepared statement interface handle it. The knowledge of a team and/or years of experience compressed into an interface that’s trivial to use.
Parameterized statements don't actually abstract over escaping; they entirely obviate the need for it, by moving the untrusted data out of band.
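Concretely (a PDO sketch with a hypothetical table): with emulated prepares turned off, the driver uses MySQL's native prepare/execute protocol.

    $pdo = new PDO('mysql:host=localhost;dbname=app;charset=utf8mb4', $user, $pass);
    // Ask for real server-side prepares rather than client-side
    // emulation: the SQL template and the values are sent in separate
    // protocol messages, so the value is never parsed as SQL at all.
    $pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
    $stmt = $pdo->prepare('INSERT INTO notes (body) VALUES (?)');
    $stmt->execute(["Robert'); DROP TABLE Students;--"]);  // stored verbatim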
It's more common than you'd think, even today. A lot of sites I recently explored leave SQL injections as is (you can see the typical MySQL errors) and rely on some kind of "security plugin" provided by a third-party for their framework of choice which checks if a URL contains something which resembles an SQL injection attempt (such as "UNION SELECT" in query params).
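Boiled down, such a plugin amounts to something like this (hypothetical code, but representative):

    // Pattern-match the raw query string and hope for the best.
    if (preg_match('/union\s+select/i', $_SERVER['QUERY_STRING'])) {
        http_response_code(403);
        exit('Blocked');
    }
    // Trivially bypassed with "UNION/**/SELECT" (an inline comment in
    // place of whitespace), URL encoding, alternate keywords, and so
    // on. The injection itself is untouched; the filter just stops
    // recognizing it.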
There is a Russian proverb (hi Mr Reagan!) which states that a cheapskate pays twofold. I suspect that this "overseas" project could easily end up costing 20x the low, low sticker price.
Been there, done that.
I have old paperwork for significant shareholdings in 3 extinct companies I worked at that tried to outsource all development. Out of the 6 or 7 major outsourced projects I was involved in or responsible for, only one could be classified as "successful". A couple more ended up with somewhat usable code/systems that met requirements (mainly because the requirements were poorly written) but which were unmaintainable and replaced within 12-18 months. The rest were all complete throwaways and represent low 7 figures' worth of money completely wasted (with, perhaps, the exception that I and others learned new ways that outsourcing can go wrong, and a bunch of useful war stories).
As I see it, when (most) companies have an in-house dev team, what they _actually_ have but do not understand (at senior management levels) is a Solution Architecture and System Design team, a software development team, and a QA and Test team - all of which are likely to be the same people, who do not have those roles listed on any org chart or job description.
Realistically, the best you can possibly hope for is to outsource the non-team-lead parts of the software development, and _maybe_ some of the testing work (if your in-house QA is on top of things).
The "50% cheaper" off shore dev team is, in my experience, at best capable of doing something under half of what a typical in house dev team does. Given that the management and oversight of the off shored development and testing work needs to be done in house, and cannot possibly be done in the company's best interest by the offshore devs or an outsourcing company, you are going to need to retain in house staff to do those roles - and they're going to need to be the more experienced and more senior people from your existing in house team.
Anybody who thinks "half the hourly rate" translates to "half the cost for the entire project" has clearly never done it before. At best, you are going to be able to outsource 50% of the work, so at best you can save perhaps 25% of the development costs - and that requires you to have some very good in-house technical people who are experienced in system design and architecture, in writing unambiguous requirement docs and User Acceptance Tests, and who have seen the sort of "tricks" outsourced developers pull to pass tests instead of actually writing secure, stable, and maintainable systems.
All your injections are belong to us.
At least they didn't offer to "correct" the offending text, turning it into a clbuttic bug.
Ah yes, it would seem little "Bobby Tables"[0] strikes again.
0 - https://xkcd.com/327/
(This has nothing to do with the post, but the title is so similar that I had to include it. Written a few days after seeing "Inception".)
Inception Rejection
(Why the dreams-within-dreams in the movie "Inception" could never happen as shown even if the technology worked as described.)
((Though this would have been a lot easier to do as an essay, the poeming was challenging and fun.))
The basis of "Inception", although it may leave you confused, is that in the brain while waking only five percent is used.
To process things in daily life this certainly has been plenty. That mental surplus means our dreams go faster by a factor of twenty.
The magic device that drives the film (the idea's at least sixty years old) allows dreams not only to be observed but changed as they they unfold.
When this device is dreamt of, unlikely as it seems, if used like in the real world, the result is dreams within dreams.
Inception's filled with dreams in dreams, each twenty times faster than before. Unfortunately, here's the problem this movie does ignore:
Level one's dream factor is twenty; four hundred at level two. Level three's factor's eight thousand - two hours there is less than a second for you.
In the first dream at twenty times the brain goes at full speed; there's no excess capacity that the next dream down would need.
A dream in a dream can only be dreamt by the real brain at the top. The faster brain that's in the dream is no more than a prop.
To go faster by four hundred, the dream at level two would need a brain twenty times as fast as the one you carry with you.
So the speed of the dreams that are further down could be no faster than the dream that's first. A quite ingenious plot device here has its bubble burst.