Also, they do all development in one branch / in the trunk (https://arxiv.org/abs/1702.01715). I never understood the explanations for why they do that.
Now the article says that with Windows they do use branches.
This. Also, I think that the build system itself is just the tip of the iceberg. At least in Google's case, it has recently been very nicely documented that Blaze is "just" one piece of how Google keeps velocity high.
A number of points in the post/article are questionable.
First, it assumes the developers had substantial control over the project schedule ("Giving excessive importance to estimates"). Certainly in my experience this is unusual. More often, the schedule is dictated by management, frequently by sales/marketing executives in commercial software development. Pushing back is very difficult and a good way to lose your job.
Sales: We have closed this great deal with BigCorp. Can you do X (complicated, challenging software project) by the end of the quarter?
Developers: Err, um, X sounds like a project that will take six months.
Sales: We really need to make our quarterly numbers. Our CEO Bob used to be a developer and he says any competent programmer can do it and we only hire the best. Competent doesn’t cut it here! You are a rockstar ninja, aren’t you? Can you prove you can’t do it by the end of the quarter?
Developers: Well, no. Schedules are usually derailed by some unexpected problem or problems. But if nothing unexpected happens, we can do it by the end of the quarter.
Sales: Great! Bob is expecting results by the end of the quarter.
So much for the beautiful, elegant software design methodologies taught in college and university CS programs and peddled by high priced consultants.
Second (“Giving no importance to project knowledge”), high technology employers seem to have extremely high turnover rates of software developers and other employees. Payscale produced a study claiming that the average employee tenure at Amazon and Google is only one year. Many companies seem to target employees with more than seven years of paid work experience (Logan’s Run style) for layoffs and “constructive discharge” (https://en.wikipedia.org/wiki/Constructive_dismissal), where employees are made uncomfortable and quit “voluntarily.” Undoubtedly, this is costly, as the author implies, but it seems to be common practice.
Yes, metrics like “issues closed,” “commits per day,” or “lines of code” don’t work very well. Once employees realize they are being tracked and evaluated on some metric, they have a strong motivation to figure out how to manipulate the metric. Even if the employees don’t try to manipulate the metrics, the metrics all have serious weaknesses and map imperfectly to value added (biz speak).
Third, are code reviews and unit testing really proven processes, especially for normal, non-Microsoft companies? In the early days of Test Driven Development (TDD), Kent Beck and his colleagues made numerous claims about the success of TDD in the Chrysler Comprehensive Compensation System (C3) payroll project, an attempt to create a unified, company-wide payroll system for Chrysler. That project in fact had a range of problems and was eventually cancelled by Chrysler in 2000, without successfully replacing the existing Chrysler payroll systems.
As the problems with C3 have become well documented and well known, TDD enthusiasts have shifted to citing studies at Microsoft and some other gigantic companies that claim practices like TDD and code reviews work well. Are these really true or do these case studies have hidden issues as C3 did?
Further, Microsoft, Google, and other companies that have played a big role in promoting these practices are very unusual companies: phenomenally successful super-unicorns with sales in the range of $40-100 billion (with a B), near-monopoly positions, and anomalously high revenue (and frequently profit) per employee. Microsoft claims revenue of $732,224 per employee. Google claims an astonishing $1,154,896 per employee. (http://www.businessinsider.com/top-tech-companies-revenue-pe...) This compares to $100,000-200,000 per employee for most successful companies.
Fergus Henderson at Google recently published an article on Google’s software engineering practices (https://arxiv.org/abs/1702.01715) with the following statements:
2.11. Frequent rewrites
Most software at Google gets rewritten every few years.
This may seem incredibly costly. Indeed, it does consume a large fraction of Google’s resources.
Note: “incredibly costly”
Companies like Microsoft and Google have enormous resources, including monopoly power, and can follow practices that are extremely costly and inefficient, which may still work for them. Even if these practices are quite harmful, they have the resources to succeed nonetheless, at least for the immediate future, the next five years.
From a business point of view, it may even be in the interests of Microsoft, Google, and other giant near monopolies to promote software development practices that smaller competitors and potential competitors simply can’t afford and that will bankrupt them if adopted.
Both code reviews and unit tests are clearly time-consuming up front. Code reviews using tools like Google’s Gerrit or Phabricator (a spin-off from Facebook, another super-unicorn) amount to committee meetings on every line of code.
Imagine my dismay when I had to collaborate with a colleague on that legacy project and his screen displayed Notepad in its full glory. Using “search” to find methods might have been rad back in the nineties, but these days, refraining from using tools such as modern IDEs, version control and code inspection will set you back tremendously. They are now absolutely required for projects of any size.
Using “search” to find methods was not rad back in the 1990s. IDEs, and code browsers specifically, have been in widespread use since the 1980s. Turbo Pascal (https://en.wikipedia.org/wiki/Turbo_Pascal) was introduced in 1983 and featured a fully functional IDE, soon followed by IDEs in many other products. Version control dates back at least to SCCS (https://en.wikipedia.org/wiki/Source_Code_Control_System), released in 1972. RCS was released in 1981, and version control was in common use from the 1980s on.
Code reviews have been around for a long time. However, in the 1990s and earlier they were restricted to relatively special projects such as the Space Shuttle avionics, where very high levels of safety and reliability, far beyond most commercial software, were required. This speaks to the “incredibly costly” quote about Google above.
Without more context, it is difficult to evaluate the use of Notepad. Simple code/text editors like Notepad and vim (a descendant of vi) start up very quickly and can be a better option for some quick tasks than launching an IDE.
Some IDEs are particularly hard to use. Early versions of Apple’s Xcode, circa 2010, were quite difficult to use in practice; Xcode has improved somewhat in more recent releases.
People vary significantly. Some developers seem to find stripped-down tools like vim, Notepad, or Notepad++ (on Windows) a better option than complicated IDEs. I am more of an emacs or IDE person.
The fact that someone else works differently than you do does not mean they are worse (or better) than you. The fact that something works well for someone else also does not mean it will work well for you — or vice versa.
There are sound reasons for duplicating code, cutting and pasting, rather than creating a function or object called from several locations in the code. If the developer anticipates that the code may subsequently diverge, then duplication is often best.
Like chess grandmasters, highly experienced developers, especially under tight time constraints (as in a chess tournament), code by intuition rather than by laboriously reasoning out every step. If it feels like the code is likely to diverge in the future, duplicate it. If it never diverges, no problem: it can be merged back later if needed.
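To make the duplication argument concrete, here is a minimal hypothetical sketch (the function names and the "audit log" requirement are invented for illustration): two exporters start out as copies of each other, and when one of them later grows an extra requirement, the duplication means no shared function needs a new flag or parameter.

```python
# Hypothetical example: two exporters that began as copy-pasted duplicates.
# If they had been merged into one shared function up front, the later
# divergence below would have forced a configuration flag on both callers.

def export_invoice(rows):
    """Format invoice rows as CSV lines."""
    return "\n".join(",".join(str(cell) for cell in row) for row in rows)

def export_audit_log(rows):
    """Started as a copy of export_invoice, then diverged."""
    lines = [",".join(str(cell) for cell in row) for row in rows]
    # Divergence point: audit logs later needed a trailing record count.
    lines.append(f"record_count,{len(rows)}")
    return "\n".join(lines)
```

If the two functions never diverge, merging them back into one is a cheap, mechanical refactoring; undoing a premature abstraction is usually harder.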
In the bad old days of structured design (1980s) and object-oriented design (OOD, 1990s), software development projects suffered from Big Design Up Front (BDUF): grandiose attempts to design a perfect software system before writing a line of code. This often resulted in massive cost and schedule overruns and outright failures. It often proves better to just throw (“hack”) something together quickly: a prototype, proof of concept, or Minimum Viable Product (MVP). Just “get something working.”
Inevitably, these prototypes and early-stage software projects compare poorly to some theoretical, perfectly designed system imagined with 20/20 hindsight. That is what seduced people into BDUF twenty or thirty years ago.
Modern Agile software development methodologies foolishly try to have it both ways: a quick initial iteration that is nevertheless perfectly designed up front, beautiful, elegant, with hundreds of tests, endless committee meetings on coding style and design (code reviews), all sorts of supposed best practices, no code duplication, and so on. This is a seductive fantasy, doomed to fail in most cases.