HL2 stolen code delay
This is really really old news. I already have RC 9, and I've had it for weeks.
This topic was started by Old_Fart.
Well, I guess it was inevitable. It would seem that corporate (software) America now has a brand new excuse for delaying the delivery of their software products. Here's the *new and improved* excuse ---
Company A is under contract to Company B to deliver bug-free software on 12/12/2003. However, Company A is SO far behind in their development schedule that they KNOW they'll never meet the timetable. So, they need a new excuse (since they've already used up all the good ones).
Let's pretend for a minute that it's VERY simple to fake a network "break-in" and even simpler to just "claim" you had a network break-in. Let's also assume that you pretend that someone stole your source code and that it was the ONLY copy you had(?). Now you can go back to Company B and say that you need another 6-8 months to finish the product because someone broke into your network and swiped all (or at least part of) your work (uh huh) and that you have to rewrite it all because now somebody else has it. Make sense so far?
Let's also assume that since Company B is funding this project to the tune of MILLIONS of dollars, they're REALLY stupid and accept Company A's excuse for a 6-8 month delay in shipment. This keeps Company A's programmers and project managers (not to mention Company B's project managers) employed for another 6 to 8 months (minimum). And after all, that's really what this is all about (staying employed).
If you believe the "rumors" (because rumors are ALL we've been reading) then Valve is THE most inept company on the face of this (or any other) planet. How can a company that stands to lose millions of dollars allow their ONLY copy of the source code to reside on a single computer that someone could hack into from outside their network? Does this make any sense to ANYONE?!
Was the code deleted and is now forever lost? I don't think so, because it's all over the Internet. Does it still compile into an executable with the necessary support files to demonstrate that it really IS a functioning product? Well, maybe that's the problem to begin with. Let me elaborate. Get yourself a soda or a cup of coffee or a beer, because this is going to take a few minutes to explain.
I've worked for a number of large local and international companies as well as government agencies, as both a software developer and a CM, for about the past 25 years or so. During that time I've NEVER used the excuse, "Gee. I need more time! I lost all my code because somebody stole it through the network and it was the ONLY copy I had." I've used other excuses but never that one. Why? Because it's just about THE dumbest excuse I've ever heard and there are FAR better excuses to choose from (like my dog Barney ate it, for one).
There are software products known as RCS and VCS, which are basically Revision/Version Control Systems. They are used in conjunction with product testing, bug-reporting and repair-tracking systems (this is what is also known as QA, or Quality Assurance) to keep track of the various permutations and revisions of software through its development cycle. They are used so that as changes are made and bugs are fixed, you have a history of all those fixes and changes and can roll back (if necessary) to ANY previous build.
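To make that concrete, here's a rough sketch using the classic UNIX RCS command-line tools (the file name gui.c and the revision number are made up for illustration):

    # Show the full revision history of a file: every check-in,
    # who made it, when, and the comment that went with it.
    rlog gui.c

    # Retrieve an older revision (say, 1.2) if the latest one is broken.
    co -r1.2 gui.c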
Typically, the RCS systems are installed on THE safest (meaning best isolated) computer system available to a company, and backups are kept either in a vault or off site (or both). Backups are typically made at around 1:00 AM and are fully automated. A single individual is responsible for maintaining that system and its integrity as well as performing the builds. This guy or gal is known as the CM (Configuration Manager) and is the single point of contact for the folks who actually write the code as well as the project manager (for build status) and the QA folks (so they know when a build is ready to test).
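Automating that 1:00 AM backup is a one-line job on a UNIX box. A sketch of a crontab entry; the script name is invented for illustration:

    # crontab entry: run the repository backup script every night at 1:00 AM
    0 1 * * * /usr/local/bin/backup_rcs_tree.sh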
How it works --- Bob the programmer is working on a piece of code (let's call it the GUI). During its inception, Bob tells the CM that the code is being written and that he needs to check it into CMS (the Configuration Management System, or RCS). Mary (the CM) says, "No problem" and creates the group (if it hasn't already been created). Bob logs into the CM system and checks in his code. We're talking about a system that requires a login (user name and password) and is VERY difficult to hack into.
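With the RCS tools, for example, Bob's initial check-in amounts to a single command (the file name and description are hypothetical):

    # First check-in: -t- supplies the descriptive text for the new archive;
    # -u leaves an unlocked, read-only working copy on Bob's machine.
    ci -u -t-"GUI module for the game client" bobs_code.c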
So, Bob the programmer now has his code in CMS. What's next? Well, when Bob is ready to make changes to his code (add new GUI elements, for example), he has to log back into CMS and check out his code (literally). In most cases he also has to give a written reason WHY he's checking the code out ("adding new GUI elements"). He now has a local copy (on his computer) of his code that is "official". Typically it's not just a single file (bobs_code.c, for example); it also includes support files. So, Bob has to check those out as well (bobs_code.h, bobs_code.dll, etc.). What does this accomplish?
Well, to begin with it "locks" that version of Bob's code so nobody else can make changes to it until Bob checks it back in again (when he's finished with it). Since this process ensures that nobody else can make changes to Bob's code (at least until Bob is done with it), in that respect, it's a safeguard. It also provides a chronological history of WHEN every change was made. This is extremely important when you're trying to find someone to blame for a bad build.
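In RCS terms, that check-out-with-lock step is one command (file names borrowed from Bob's example):

    # Check out the latest revisions WITH a lock (-l) so nobody else can
    # check in changes to these files until Bob releases them.
    co -l bobs_code.c bobs_code.h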
So, Bob makes his changes and checks his code back into CMS along with a comment that says, "Added a Save Game button and functional support code". Bob either tells Mary the CM that it's ready to build or simply sends her an email to that effect.
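Again with the RCS tools, the comment goes in via the -m flag; something like:

    # Check the modified file back in; -m records the change comment and
    # -u leaves a read-only working copy behind for reference.
    ci -u -m"Added a Save Game button and functional support code" bobs_code.c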
In the meantime, the other programmers are doing the same thing (checking out their code, making changes and then checking it back in). These changes are based on a requirements matrix (what the code is SUPPOSED to do) as well as on the timeline or development schedule (Bob works on the GUI from July 10 to July 30, for example). So, as each piece of the code is written, the project manager can mark it off his PERT chart (a method of defining a schedule for a project) and show Company B (whoever is actually PAYING the bills) that his team is making progress ... in which case they (Company A) continue to get paid for their work.
Mary the CM now has several new versions of code from several programmers checked in and Tom, the project manager, says, "Go ahead and build it" ... which she does. If the build succeeds without error (meaning the code compiled successfully), Mary writes a little report saying that the code is ready for testing and that it includes the following changes/additions (blah blah blah).
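A build like Mary's usually boils down to a scripted check-out of the latest revisions followed by a make run, with all the output captured to a log. A rough sketch, with the makefile and log names invented:

    # Pull clean, read-only copies of the latest checked-in revisions.
    co RCS/*,v

    # Build the whole application, capturing every message for the report.
    make -f game.mak > build.log 2>&1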
If the build fails, she looks at the log file(s) and figures out that Bob screwed the pooch and wrote some bad code. She sends Bob an email along with his section of the log file that shows where the code failed to compile. Bob now has to check out his code again, fix it and then check it back in and report back to Mary that it's NOW ready to build. Bob also gets one checkmark in the "wrote bad code" category for his next salary review (which in Bob's case happens WAY too frequently).
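Finding the culprit is usually a one-liner against that same (hypothetical) build log:

    # Pull the failing lines, with line numbers, out of the build log so
    # Bob gets exactly the section where his code broke the compile.
    grep -i -n "error" build.log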
And so the process continues. The testers get the freshly completed build and test its new functionality. The test plan they are following is also based on the requirements matrix, and specific test procedures are written so that every new function is tested in every way possible. The testers then write their reports (via a similar tracking system) and submit them. The lead tester or QA guy then submits his report to the project manager as to how well (or poorly) the new build works. If it works well, the project manager can check off another box on his PERT chart, report back to Company B that things are going well and then wait for his next paycheck to arrive in the mail.
This continues on until one day (drum roll please) they're done! They then have a "final acceptance test" (which is again based on the requirements matrix) where they sit down with the "customer" (Company B people) and demonstrate that everything they asked for, they got (or not). If something doesn't work as expected, it goes into another report and the two companies then sit down and hash out what's left to do and whether or not it's part of the original agreement (the contract). If it's NOT part of the original agreement, then Company A can ask for more money (the good part!). If it IS part of the original agreement (Booo! Hisss!) then Company A is "on the hook" (that's the technical term that means they have to do it for FREE now) to deliver the goods on their own dime.
The final result is a piece of software that either meets the acceptance criteria set forth by Company B in the requirements matrix ... or it doesn't. If it doesn't, somebody just lost their job. If it DOES, everybody makes money.
During this entire process, Mary the CM is making backups (to tape or CD-RW or DVD ... whatever) and storing that media either off-site somewhere (there are actually media storage companies that specialize in this business) or in a fireproof vault (a safe) or locked file cabinet ... somewhere that is NOT accessible to anyone but her and the project manager. Wherever it is kept, it SHOULD be in a fireproof container because you never know when Bob will decide he's had enough and try to torch the whole place.
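The backup itself is nothing exotic. A dated tar archive of the repository tree covers it; the paths here are made up for illustration:

    # Bundle the entire repository into a dated archive file, ready to be
    # copied to tape, burned to CD or shipped off-site.
    tar cvf /backup/rcs-`date +%Y%m%d`.tar /usr/local/repository/RCS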
The NICE thing about the whole CM process is that changes to the code are tracked so closely and completely that you can literally roll back to ANY version of the software at any given time. This allows the programmers to "un-break" their code should they at some stage write such horrendously bad code that it completely screws the application. They can then have Mary the CM delete that particular revision of their code and roll back to an older, more robust version that DOESN'T break the application. That's what revision control systems are all about and WHY so many customers REQUIRE that they be used.
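RCS even has a command for exactly that "delete the bad revision and roll back" step. A sketch, with made-up revision numbers:

    # Outdate (delete) the broken revision 1.8 from the archive file ...
    rcs -o1.8 bobs_code.c

    # ... then check out the last good revision, locked, and carry on.
    co -l -r1.7 bobs_code.c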
The BAD thing about all this is that for Bob to actually be able to do his job, he also needs ALL the code (or at least a large part of it) along with the compiler, a version of the "make" (.mak) file, the "make" utility and everything else necessary for him to compile the application locally on his machine so he can test it BEFORE he checks it into CMS. Bob's "local" version may or may not be complete, but if it is, he can very easily burn it to a CD and distribute it to whoever he likes (which is typically how Beta versions of software end up on the Internet).
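Bob's local build is the same drill on his own machine, which is exactly why it's a leak waiting to happen: everything he needs to compile is everything a thief needs too (names invented, as before):

    # Grab read-only copies of the whole source tree ...
    co RCS/*,v

    # ... and build it locally, exactly the way the CM would.
    make -f game.mak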
So, you now know the entire process from start to finish. You tell me. How on earth can Valve justify another 6-8 months of work based on the "disappearance" of one particular version of the code when, if they had a SINGLE brain amongst them, they SHOULD be using some form of revision control/CMS software to ensure that at NO time can they lose even the smallest, most insignificant file in the entire application? This is NOT outrageously expensive software (revision control/CMS software, that is) by the way. It's fairly inexpensive.
Surely they are tracking their changes to the software. Right? Surely they have backups. Right? We're talking about a game here that ATi paid $6 MILLION for just the right to distribute. Aren't we? And in ALL that's being reported by Valve and the media, does ANY of it seem reasonable to ANYONE? I really don't have any "theories" as to WHY Valve is doing what they're doing but frankly, it just sounds to me like they're stalling for more time because they DON'T have the guts to admit they're not as far along (in the development cycle) as they SHOULD be at this stage of the game ... which is too bad because I was really looking forward to seeing what it (the game) has to offer.
We are beset throughout life by hardships and disappointments. In the end it's who we are and how we face those disappointments that makes us human. We either stomp our feet and make pained faces or we accept the inevitable and learn to live with what has just transpired. In so doing, we become better men and women and learn to deal with our fate far better, and hopefully wiser, than we would have a day before. This is how we grow and find the truth and wisdom in those things we hold near and dear to our hearts. If we don't, we're just knuckle-heads.
Later.
Responses to this topic
They didn't lose their source code; it was copied. It must now be rewritten, or at least verified that there are no exploits (security, cheating or otherwise) that can be reverse engineered (remember the UT debacle, where there was a huge vulnerability to client machines, if you don't think there's a risk).
But 6-8 months? Huge!