The UTF16 parser returns a different byte order than its .NET counterpart: .NET's UnicodeEncoding defaults to little-endian UTF-16, while CryptoJS.enc.Utf16 parses the string as big-endian. I have this .NET test, which passes and describes the expected behavior:
[Test]
public void TestSHA1()
{
    var alg = new System.Security.Cryptography.SHA1Managed();
    byte[] bytes = new UnicodeEncoding().GetBytes("ax");
    Assert.AreEqual("61-00-78-00", BitConverter.ToString(bytes));

    byte[] hash = alg.ComputeHash(bytes);
    Assert.AreEqual("E7-77-35-54-8D-77-09-2C-87-B8-30-3E-9F-55-1A-3A-48-B8-0A-A6", BitConverter.ToString(hash));

    string base64String = Convert.ToBase64String(hash);
    Assert.AreEqual("53c1VI13CSyHuDA+n1UaOki4CqY=", base64String);
}
Here are the same data and expectations as a Jasmine unit test:
describe('CryptoJS test', function () {
    describe('SHA1', function () {
        it('should match .NET version', function () {
            var passwd = CryptoJS.enc.Utf16.parse('ax', true);
            expect(passwd.toString()).toBe('61007800');

            var hash = CryptoJS.SHA1(passwd);
            var hex = CryptoJS.enc.Hex.stringify(hash);
            expect(hex).toBe('e77735548d77092c87b8303e9f551a3a48b80aa6');

            var base64 = CryptoJS.enc.Base64.stringify(hash);
            expect(base64).toBe('53c1VI13CSyHuDA+n1UaOki4CqY=');
        });
    });
});
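For reference, the unpatched 3.0.2 parser treats the string as big-endian, which is exactly the byte swap the test above works around:

// What 3.0.2 produces without a byte-order option: a big-endian parse
var be = CryptoJS.enc.Utf16.parse('ax');
be.toString(); // '00610078' -- byte-swapped relative to .NET's '61-00-78-00'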
Most likely you would want to introduce another encoding or a parameter to allow a different byte order. Here is a patch for enc-utf16.js:
/**
 * Converts a UTF-16 string to a word array.
 *
 * @param {string} utf16Str The UTF-16 string.
 * @param {boolean} littleEndian (Optional) Whether to treat the input as little-endian. Default: false (big-endian).
 *
 * @return {WordArray} The word array.
 *
 * @static
 *
 * @example
 *
 *     var wordArray = CryptoJS.enc.Utf16.parse(utf16String);
 *     var leWordArray = CryptoJS.enc.Utf16.parse(utf16String, true);
 */
parse: function (utf16Str, littleEndian) {
    // Shortcut
    var utf16StrLength = utf16Str.length;
    var le = (typeof littleEndian === 'undefined') ? false : littleEndian;

    // Convert
    var words = [];
    for (var i = 0; i < utf16StrLength; i++) {
        var ch = utf16Str.charCodeAt(i);
        if (le) {
            // Swap the two bytes of each 16-bit code unit
            ch = (ch >>> 8) | ((ch & 0xff) << 8);
        }
        words[i >>> 1] |= ch << (16 - (i % 2) * 16);
    }

    return WordArray.create(words, utf16StrLength * 2);
}
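If a separate encoding is preferred over the extra parameter, the same swap can be packaged as its own encoder object. This is only a sketch built on the patched parse above; the Utf16LE name is a suggestion, not an existing 3.0.2 API:

// Sketch: expose the little-endian variant as its own encoder.
// Utf16LE is a suggested name here, not part of CryptoJS 3.0.2.
var swap16 = function (w) {
    // Swap the two bytes of a 16-bit value
    return ((w >>> 8) | ((w & 0xff) << 8)) & 0xffff;
};

CryptoJS.enc.Utf16LE = {
    parse: function (utf16Str) {
        // Delegate to the patched parser with littleEndian = true
        return CryptoJS.enc.Utf16.parse(utf16Str, true);
    },
    stringify: function (wordArray) {
        // Mirror the big-endian stringify, swapping each 16-bit unit back
        var words = wordArray.words;
        var sigBytes = wordArray.sigBytes;
        var utf16Chars = [];
        for (var i = 0; i < sigBytes; i += 2) {
            var codePoint = (words[i >>> 2] >>> (16 - (i % 4) * 8)) & 0xffff;
            utf16Chars.push(String.fromCharCode(swap16(codePoint)));
        }
        return utf16Chars.join('');
    }
};

// Usage:
CryptoJS.enc.Utf16LE.parse('ax').toString(); // '61007800'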
What version of the product are you using? On what operating system?
3.0.2, Chrome 21