I just tried it, and it doesn't work: it doesn't give the same results as actual software with CRC32 implemented.

In general, Wikipedia shows CRCs working like this. In this example they use the simple 4-bit divisor 1011.

So this shows that the most significant bit is on the left, and the divisor is moved right to line up with the next "1" bit, where an XOR operation is performed. This process is repeated until all the data bits have been set to 0. The 3 padding bits appended on the right at the start then hold the remainder, which is the 3-bit CRC.

Code:
11010011101100 000 <--- input right padded by 3 bits
1011               <--- divisor
01100011101100 000 <--- result
 1011              <--- divisor ...
00111011101100 000
  1011
00010111101100 000
   1011
00000001101100 000
       1011
00000000110100 000
        1011
00000000011000 000
         1011
00000000001110 000
          1011
00000000000101 000
           101 1
-----------------
00000000000000 100 <--- remainder (3 bits)
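The long division above can be sketched in a few lines of Python (the function name and string-based representation are my own choices, just to mirror the worked example bit for bit):

```python
def crc_remainder(data_bits: str, divisor_bits: str) -> str:
    """GF(2) long division as in the Wikipedia example, on bit strings."""
    n = len(divisor_bits) - 1            # number of padding/remainder bits
    dividend = list(data_bits + "0" * n)  # append n zero bits on the right
    for i in range(len(data_bits)):
        if dividend[i] == "1":
            # line the divisor up with this "1" bit and XOR it in
            for j, d in enumerate(divisor_bits):
                dividend[i + j] = str(int(dividend[i + j]) ^ int(d))
    return "".join(dividend[-n:])        # the padding bits now hold the CRC

print(crc_remainder("11010011101100", "1011"))  # -> 100
```

Running this on the example input reproduces the remainder 100 shown in the last line of the division.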

However, as simple as this looks, it doesn't actually work when repeated with CRC32. For CRC32, I simply appended 32 zero bits to the right and used the polynomial shown on Wikipedia, which is:

x^32+x^26+x^23+x^22+x^16+x^12+x^11+x^10+x^8+x^7+x^5+x^4+x^2+x^1+1

This corresponds to the binary number:

100000100110000010001110110110111
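As a sanity check, that binary divisor can be derived directly from the polynomial's exponents (variable names here are my own):

```python
# Exponents of the CRC-32 generator polynomial as listed on Wikipedia
exponents = [32, 26, 23, 22, 16, 12, 11, 10, 8, 7, 5, 4, 2, 1, 0]

# Set one bit per term of the polynomial to build the 33-bit divisor
divisor = sum(1 << e for e in exponents)

print(format(divisor, "b"))  # -> 100000100110000010001110110110111
print(format(divisor, "X"))  # the same value in hex: 104C11DB7
```

So the 33-bit divisor itself matches the polynomial; it is the hex constant 0x04C11DB7 with the implicit leading x^32 bit included.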

I used this number as the divisor for the CRC operation, because according to Wikipedia that is the divisor for CRC32. My implementation went exactly like Wikipedia's 4-bit divisor (CRC-3) example, just with a 33-bit divisor (CRC32) instead. However, the results do not match those of known working CRC32 implementations in two other pieces of software I tested (EasyHash and the HxD hex editor). The CRC32 results from both of these programs are identical to each other, suggesting that my implementation (and the one shown on Wikipedia) is flawed.

Below are the results of what I found, using the simplest possible input: a single byte with all bits set to 1 (11111111), which is the extended-ASCII text character ÿ.

Here is a screenshot showing what my program outputs (it displays a log of all the intermediate steps of the CRC32 that was calculated, including the data at each step and the position of the divisor at each step).

As you can see if you look at the rightmost 32 bits (the CRC32) in the final output stage of the calculation, the bits are:

10110001111101110100000010110100
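For what it's worth, that value is reproducible with a plain long-division sketch (names are mine), so the division itself appears to be carried out correctly:

```python
# Naive GF(2) long division: the input byte followed by 32 appended zero
# bits, divided by the 33-bit divisor; no initial value, no bit
# reflection, no final XOR.
POLY = 0b100000100110000010001110110110111  # 0x104C11DB7

def naive_crc32(data: bytes) -> int:
    value = int.from_bytes(data, "big") << 32  # append 32 zero bits
    for shift in range(len(data) * 8 - 1, -1, -1):
        if (value >> (shift + 32)) & 1:        # next "1" bit to cancel
            value ^= POLY << shift             # XOR the divisor in place
    return value                               # 32-bit remainder

print(format(naive_crc32(b"\xff"), "032b"))
# -> 10110001111101110100000010110100
```

This prints exactly the 32 remainder bits quoted above (0xB1F740B4 in hex), so the mismatch with EasyHash and HxD is not an arithmetic mistake in the division.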

Now the output of EasyHash for the same input (which I have verified with the HxD hex editor, which also implements CRC32 calculation) is this:

FF000000

This hexadecimal number corresponds to the binary number:

11111111000000000000000000000000
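That reference value is easy to confirm from Python's standard library, which ships a standard CRC-32 implementation:

```python
import binascii
import zlib

# Standard CRC-32 of a single 0xFF byte, as computed by the stdlib
crc = binascii.crc32(b"\xff")

print(format(crc, "08X"))   # -> FF000000
print(format(crc, "032b"))  # -> 11111111000000000000000000000000

# binascii and zlib expose the same CRC-32; they agree
assert crc == zlib.crc32(b"\xff")
```

So EasyHash and HxD match what the widely deployed CRC-32 produces for this input, and it is the plain long-division result that disagrees.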

And of course 10110001111101110100000010110100 is in NO WAY equal to 11111111000000000000000000000000. In fact, I couldn't convert one to the other using operations like bit inversion (NOT gating the data) or even bit-order reversal. These two outputs look NOTHING alike.

What is wrong with the implementation here? I need some hints on where I (and apparently Wikipedia as well) am going wrong. It looked so simple in the Wikipedia article, but it really doesn't work that way. Any info on the correct way to calculate CRC32 would be REALLY helpful.