Sense at last (Score:5, Interesting)
Artificial limits on line length are a mistake: the value is totally arbitrary, and it has horrendous knock-on effects. People end up choosing bad variable names because otherwise the lines get too long. The intention was to produce readable code that can be viewed and understood, and if that is the intention, then line length should defer to readability, not the other way around.
Re: (Score:5, Funny)
Yes! Give me a hard limit of 80 character lines and I'll show you a bunch of 3-letter variable names!
A.
Re: (Score:2)
Overly long names are as bad as overly short names though. FortyCharacterCobolNames don't really help, as they're just slower to read. Typically, that's a sign of prefixing or suffixing names with a bunch of needless, repetitive crap.
Long class names aren't so bad, unless you're that guy who names every variable with a short name followed by the full class name. Don't be that guy.
Re:Sense at last (Score:2)
Overly long names are as bad as overly short names though.
Coding for Windows around the turn of the century was ridiculous with that gawdawful variable naming convention. What was it called, Hungarian notation or something like that?
Re:Sense at last (Score:4, Interesting)
I have seen that in Java code. >80 char variable names and 2 or 3 chars different between them. Needless to say, the project was later scrapped because of numerous design defects that could not be fixed and constantly caused problems.
Re:Sense at last (Score:5, Informative)
Hungarian notation was actually well motivated for its intended use, and then went horribly wrong. First within Microsoft, and then it spread like a disease as people took MS APIs as gospel.
The original intent, which was smart, was to prefix row variables and column variables in the Excel codebase with rw and co, so that you'd never accidentally try to add or subtract a row and a column. Both types were ints, so this wasn't something the compiler could check for you, nor was the information redundant. Sadly, it was all downhill from there.
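As a hedged illustration of that original scheme (the variable and function names here are invented, not from the Excel codebase), the prefixes make a mixed expression stand out even though the compiler sees only ints:

```cpp
// Illustrative sketch of the original convention: rw- and co- prefixes
// tag the kind of index, so a mixed expression is visibly suspect
// even though every variable is a plain int.
int heightOfRange(int rwFirst, int rwLast) {
    return rwLast - rwFirst;       // rw - rw: prefixes agree, reads fine
}
// int bad = rwFirst + coLeft;     // rw + co: would compile, but looks wrong
```

The check happens in the reader's head rather than the compiler, which is exactly the limitation the rest of this thread argues about.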
Re: (Score:1)
The guidelines were given in an attempt to improve coding at Microsoft. It failed. Then the entire book of Code Complete was intended for Microsoft devs so they'd do better, and it failed, but also succeeded in fooling people into thinking it was actually Microsoft's standard coding guidelines.
Re: (Score:2)
Both types were ints, so this wasn't something the compiler could check for you,
Except it could - by making the concepts different types. That's what types are for, and was possible even back then.
Re: (Score:2)
Not in C. If you want the mathematical operators to work, you've got ints. typedefs don't change that. It's a fundamental failing in most languages that you can't create something that is as easy and efficient to work with as an int, but is treated as a different type.
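A quick sketch of why typedef doesn't help here: the alias is fully interchangeable with int, so nothing stops the mix (this compiles in C and C++ alike; the names are illustrative):

```cpp
// typedef only creates an alias, not a distinct type, so the compiler
// happily mixes the two "types" without a warning.
typedef int Row;
typedef int Col;

int mix(Row r, Col c) {
    return r + c;   // compiles cleanly: both are just int
}
```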
Re: (Score:2)
Pascal, Modula, Oberon, Ada - they all do that. No idea about C++, I have to admit.
Re: (Score:2)
Yes in C. You just use a struct. The fact that it stops people from using mathematical operations is a good thing, because it makes it an error to misuse things with the wrong type. There is no cause for, say, adding a column to a row. If there were, you'd write a simple, named function to do it that makes it immediately obvious what it was doing.
Re: (Score:2)
But Excel was written in C - pre-C99 C for that matter. The horror that Hungarian Notation became was the result of the limited type expressiveness of C.
Pascal, Modula, Oberon, Ada - they all do that. No idea about C++, I have to admit.
OK, but any languages anyone uses in this century? Not saying it's a bad idea, just an abandoned one. There's no easy way in C++, but you can do it with a lot of effort. Maybe worth doing for a row class and a column class in a spreadsheet, but not for more normal use.
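The "a lot of effort" in C++ is smaller than it sounds; a minimal strong-type sketch (the Row/Col names are assumed, not from any real codebase) looks like this:

```cpp
// Minimal "strong int" wrappers: Row+Row and Col+Col are defined,
// Row+Col is a compile-time error because no such operator exists.
struct Row {
    int v;
    explicit Row(int v) : v(v) {}
};
struct Col {
    int v;
    explicit Col(int v) : v(v) {}
};

inline Row operator+(Row a, Row b) { return Row(a.v + b.v); }
inline Col operator+(Col a, Col b) { return Col(a.v + b.v); }

// Row(1) + Col(2);   // deliberately absent: this line would not compile
```

The explicit constructors also stop a bare int from silently converting into either type.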
Re: (Score:2)
Maybe worth doing for a row class and a column class in a spreadsheet, but not for more normal use.
There are plenty of libraries for such uses - like unit libraries in the case of physical calculations.
A row and column class is simple and doesn't require much work.
The point is you're supposed to use dedicated classes to represent independent concepts, each with its own limited set of operations, and that was always the case even back in the C89 days.
Re: (Score:2)
You had structs and dedicated functions. That's all that is required. You don't need any more than that.
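A C-flavoured sketch of that structs-plus-functions approach (the type names and the cell_index helper are invented for illustration, and C89 would need the struct keyword exactly as written here):

```cpp
// Distinct struct types make accidental row/col mixing a type error,
// and the one legitimate combination gets a named function so the
// intent is explicit at the call site.
struct row { int v; };
struct col { int v; };

int cell_index(struct row r, struct col c, int ncols) {
    return r.v * ncols + c.v;   // row-major cell index
}
```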
Useless for the purpose. Speed matters. Size matters. Doubly so back in the tail end of 16-bit computers, when Excel was first written.
A row and column class is simple and doesn't require much work.
Every problem has a simple, easy to understand, wrong answer.
The point is you're supposed to use dedicated classes to represent independent concepts, each with its own limited set of operations,
The point of software development is first to solve the customer's problem. Fail to do that, and it doesn't matter how pretty your code is as it will never get a new version.
Re: (Score:2)
Useless for the purpose. Speed matters. Size matters.
A struct, with a single integer member is too big?
The point of software development is first to solve the customer's problem. Fail to do that, and it doesn't matter how pretty your code
So solving the customer's problem with bugs is a good thing? This is not about prettiness of code. In fact, it's pretty ugly, compared to using an integer. The types aren't there to make it pretty. The types are there to prevent actual bugs. The fact you can't understand this tells me you write pretty shit software.
Re: (Score:2)
A struct, with a single integer member is too big?
Yes, typically 2-4 ints in size. Some compilers have pragmas to help, but when ints were 16 bits I don't think it was possible to get that with pragmas (it has been a while). Often structs are 8-byte aligned, if not 16-byte, to ensure that whatever is the first element of the struct is aligned.
Also, old compilers simply wouldn't put structs on the stack. Even if you declared the function that way, you'd get a pointer.
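For what it's worth, the overhead claim is easy to check on a modern toolchain: on mainstream ABIs today a struct with a single int member costs nothing, though that says little about the 16-bit compilers discussed above. A quick sketch:

```cpp
// On current mainstream ABIs a single-int struct adds no padding;
// the 2-4x overhead described above was a quirk of older compilers.
struct row { int v; };

static_assert(sizeof(row) == sizeof(int), "no size overhead");
static_assert(alignof(row) == alignof(int), "no extra alignment");
```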
So solving the customer's problem with bugs is a good thing? This is not about prettiness of code. In fact, it's pretty ugly, compared to using an integer. The types aren't there to make it pretty. The types are there to prevent actual bugs. The fact you can't understand this tells me you write pretty shit software.
You've simply no familiarity with the sort of constrained environment programming C is appropriate for.
Re: (Score:2)
In C++, you'd use operator overloading, but for some strange reason people hate operator overloading even more than Hungarian notation or userspace bugs.
Re: (Score:2)
And operator overloading should be used judiciously, when it makes sense. So something that doesn't make any semantic sense, like adding a row to a column, wouldn't have an operator for it.
Re: (Score:2)
That's what having a type system is for.
Re: Sense at last (Score:3)
I actually like to believe it comes from a misunderstanding of language. Its originator says "types" in his paper, but actually means "kinds" of variables (i.e. the purpose they fulfill, rather than what they are in terms of data typing). E.g.: "int imgWidth, imgHeight; float imgScale;" vs "int winWidth, winHeight; float winZoom;".
You keep the variables that belong to the same logical thing you try to do together - kind of like micro-structures.
Re: (Score:2)
Its originator says "types" in his paper, but actually means "kinds" of variables (i.e. the purpose they fulfill, rather than what they are in terms of data typing)
Yes! A thousand times this.
The best explanation of the original vision of Hungarian and what's good about it is this essay:
https://www.joelonsoftware.com/2005/05/11/making-wrong-code-look-wrong/ [joelonsoftware.com]
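For readers who don't follow the link: the essay's running example distinguishes unsafe user input (us prefix) from HTML-encoded safe strings (s prefix), so writing out unsafe data looks wrong at a glance. A rough sketch of that convention (the function name here is mine, not the essay's):

```cpp
#include <string>

// Convert an unsafe (us-prefixed) string into a safe (s-prefixed),
// HTML-encoded one. Code that writes a us* variable directly to output
// then stands out visually as a probable XSS bug.
std::string sFromUs(const std::string& usInput) {
    std::string s;
    for (char ch : usInput) {
        switch (ch) {
            case '<': s += "&lt;";  break;
            case '>': s += "&gt;";  break;
            case '&': s += "&amp;"; break;
            default:  s += ch;      break;
        }
    }
    return s;
}
// Write(usName);           // looks wrong: raw unsafe string
// Write(sFromUs(usName));  // looks right: encoded first
```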
I spent some time as a very junior guy working in the Apps group at Microsoft and I will say without apology that I actually like Apps Hungarian.
When