Many different programming languages are used to create all kinds of software, and software security is extremely important today, and will probably become even more important in the future.
Big Question:
Are all programming languages inherently equal from a security viewpoint, or are some really inherently more secure than others? (And, as a related question, are some operating systems more secure than others?) How can we compare them objectively for inherent (natural) security?
First, how are software bugs used by hackers or malware to break the security of computer systems?
I think they send a series of crafted instructions/input data to any accessible software, to trigger known (and unpatched) bugs, which cause unhandled runtime exceptions such as division by zero, buffer overflow/underflow, array index out of bounds, dangling pointer dereference, and so on.
Now, for simplicity, assume we want to compare the security of native executables for a certain OS, compiled using a certain brand and version of compiler, for a certain programming language (and its version), such as C, C++, Delphi, and so on.
Imagine we created a table for objectively comparing security, as follows:
First column: a (sorted) full list of common runtime exceptions, such as division by zero, buffer overflow/underflow, array index out of bounds, dangling pointer, and so on.
Next, add one column for each language compiler.
Next, we fill out the cell values of our table (each either -1 (No) or +1 (Yes)) by asking this question:
Is the runtime exception on the left possible for the language compiler at the top? (Assume the programmer wrote some section of compiled software using that language compiler and forgot to add any exception handling for it.)
(If the OS version we are creating this table for already handles a certain common runtime exception safely in general, so that it can never be exploited by hackers/malware, then obviously we do not need to include that exception in our table.)
(If the runtime exception on the left inherently cannot happen for the language compiler at the top, the cell value must still be -1 (No), because that is still an advantage for that language compiler. Since all these programming languages are Turing-complete, any algorithm can be implemented with any of them; so if a runtime exception inherently cannot happen, no ability is lost, but inherent security is gained.)
Then, in the end, we can compare the inherent security of each language compiler included in our table by simply calculating the sum of each column as an inherent security score. Smaller sums indicate higher inherent security.
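As a minimal sketch of this table and its column sums (the cell values below are hypothetical, chosen only for illustration, not real measurements of these compilers):

```python
# Hypothetical scoring table: rows are common runtime exceptions,
# columns are language compilers. Cell values are illustrative only:
# +1 = the exception can happen, -1 = it inherently cannot happen.
exceptions = ["division by zero", "buffer overflow", "array out of bounds", "dangling pointer"]
table = {
    "C":      [+1, +1, +1, +1],
    "C++":    [+1, +1, +1, +1],
    "Delphi": [+1, +1, -1, +1],  # assumed: range checking enabled for arrays
}

# Inherent security score = column sum; smaller means more inherently secure.
scores = {lang: sum(col) for lang, col in table.items()}
print(scores)  # e.g. {'C': 4, 'C++': 4, 'Delphi': 2}
```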
But I think if we did a statistical analysis of all existing software (for a certain OS version), we would find that some kinds of dangerous runtime exceptions are more common than others. So if we know the relative frequency (RF) of each common runtime exception (bug) in our table, we can make our inherent security scores more realistic/accurate by using the relative frequencies as weights for the runtime exceptions on the left.
(Each cell value would then be -1*RF or +1*RF.)
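Continuing the sketch, with hypothetical relative frequencies (real values would have to come from an actual statistical analysis of existing software), the weighted scores follow directly:

```python
# Hypothetical relative frequencies of each runtime exception,
# normalized to sum to 1.0 (illustrative values, not real statistics).
# Row order: division by zero, buffer overflow, out of bounds, dangling pointer.
rf = [0.05, 0.50, 0.30, 0.15]

table = {
    "C":      [+1, +1, +1, +1],
    "Delphi": [+1, +1, -1, +1],  # assumed: out-of-bounds inherently prevented
}

# Weighted score: each cell becomes (+1 or -1) * RF, then sum the column.
weighted = {lang: sum(v * w for v, w in zip(col, rf)) for lang, col in table.items()}
print(weighted)  # {'C': 1.0, 'Delphi': 0.4}
```

Note how the weighting changes the picture: preventing a frequent bug class (here, out-of-bounds at RF 0.30) lowers the score much more than preventing a rare one would.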
Can we use this kind of programming language compiler security scoring table to also score and compare the security of different OSs (and their different versions)? I think the answer is yes.
Imagine we re-evaluated the same security scoring table (same set of row and column titles) for different OSs (and their different versions). Then, for each table, we calculated the sum of all cells to get a total security score for that OS (version). (Again, smaller values would indicate higher inherent security.)
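As a sketch of this OS-level comparison (again with made-up cell values; in a table re-evaluated per OS, an exception that the OS always handles safely would simply get -1 in that OS's cells):

```python
# One table per OS version: same rows (exceptions) and columns (compilers),
# but cell values re-evaluated for each OS. All values are hypothetical.
os_tables = {
    "OS-A v1": {"C": [+1, +1, +1], "Delphi": [+1, -1, +1]},
    "OS-B v2": {"C": [+1, +1, -1], "Delphi": [-1, -1, -1]},  # assumed: OS-B traps more faults safely
}

# Total OS security score = sum of every cell in that OS's table;
# smaller totals indicate higher inherent security.
os_scores = {os: sum(sum(col) for col in t.values()) for os, t in os_tables.items()}
print(os_scores)  # {'OS-A v1': 4, 'OS-B v2': -2}
```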