As we know, access to some Microsoft Azure services relies on SAS (Shared Access Signature) authentication: we send a token that grants us the rights to perform specific operations.


This token is obtained by building a string containing some information, including the URI to access and the expiration time, over which an HMAC (Hash-based Message Authentication Code) is computed with SHA256. The result of this hashing operation is encoded in Base64 and inserted into the token (Shared Access Signature) in the appropriate format.
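To make the procedure concrete, here is a minimal desktop (.Net) sketch of the steps described above. The method name CreateSasToken and its parameters are illustrative placeholders, not the exact code I used; the token format is the one used by the Service Bus.

```csharp
using System;
using System.Net;
using System.Security.Cryptography;
using System.Text;

class SasTokenSample
{
    // Builds a Service Bus SAS token for the given resource URI.
    // keyName/key stand in for a real shared access policy name and key.
    static string CreateSasToken(string resourceUri, string keyName, string key, uint ttlSeconds)
    {
        // Expiration expressed as Unix time (seconds since 1970-01-01 UTC).
        uint expiry = (uint)(DateTime.UtcNow -
            new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc)).TotalSeconds + ttlSeconds;

        // String to sign: URL-encoded resource URI, a newline, then the expiry.
        string stringToSign = WebUtility.UrlEncode(resourceUri) + "\n" + expiry;

        // HMAC-SHA256 over the string; the Base64-encoded hash goes into the token.
        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key)))
        {
            string signature = Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));

            return String.Format("SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}",
                WebUtility.UrlEncode(resourceUri),
                WebUtility.UrlEncode(signature),
                expiry,
                keyName);
        }
    }
}
```

Note that the last step, Convert.ToBase64String, is exactly where the problem described below shows up on the .Net Micro Framework.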


The goal of this short post isn't to describe the procedure for determining the token, but to warn the reader about the Base64 conversion functions provided by the .Net Micro Framework.


While testing with a FEZ Spider board connecting to the Service Bus, I repeatedly hit an unauthorized-access error due to an incorrect token. Performing the same procedure on the PC, everything worked properly. How come?


Initially I suspected a mistake in the signature (HMAC) calculation, and only later did I realize that something was wrong in the Base64 encoding.


I finally isolated a failing case, in which the computed signature consists of the following bytes:

   byte[] hmac = { 0x16, 0x01, 0x70, 0x76, 0xec, 0xc8, 0xdb, 0x01, 0xf0, 0x6a, 0x60, 0x9a, 0x89, 0x68, 0x6f, 0xef, 0x68, 0x9a, 0xad, 0x10, 0xe7, 0x92, 0x9b, 0xef, 0xfa, 0x10, 0x86, 0x24, 0xf1, 0x72, 0xa6, 0x69 };


If we Base64 encode the byte array above using the Convert.ToBase64String(hmac) method on the PC, the result is the following:

   FgFwduzI2wHwamCaiWhv72iarRDnkpvv+hCGJPFypmk=
If we perform the same operation with the .Net Micro Framework, the encoding is the following:

   FgFwduzI2wHwamCaiWhv72iarRDnkpvv!hCGJPFypmk=
The two encodings differ in a single character, '+' in the first versus '!' in the second, but ... what is the reason?


As always, the answer lies in the .Net Micro Framework implementation, this time in the Convert.cs file, which contains two different "alphabets" for Base64 encoding: the standard RFC 4648 alphabet and a non-standard one.

   /// <summary>
   /// Conversion array from 6 bit of value into base64 encoded character.
   /// </summary>
   static char[] s_rgchBase64EncodingDefault = new char[]
   {
       'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', /* 12 */
       'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', /* 24 */
       'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', /* 36 */
       'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', /* 48 */
       'w', 'x', 'y', 'z', '0', '1', '2', '3', '4', '5', '6', '7', /* 60 */
       '8', '9', '!', '*' /* 64 */
   };

   static char[] s_rgchBase64EncodingRFC4648 = new char[]
   {
       'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', /* 12 */
       'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', /* 24 */
       'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', /* 36 */
       'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', /* 48 */
       'w', 'x', 'y', 'z', '0', '1', '2', '3', '4', '5', '6', '7', /* 60 */
       '8', '9', '+', '/' /* 64 */
   };

   static char[] s_rgchBase64Encoding = s_rgchBase64EncodingDefault;


These two alphabets differ only in the last two characters: '!' and '*' in the default alphabet versus '+' and '/' in the RFC 4648 one. Apparently '!' comes from a Base64 variant used in regular expressions, while '*' comes from a variant used for privacy-enhanced mail.
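A quick check (desktop C#, written just for this post) confirms that the two alphabets agree on indices 0..61 and diverge only on the last two entries, which map the 6-bit values 62 and 63:

```csharp
using System;

class AlphabetDiff
{
    static void Main()
    {
        // The two 64-character alphabets from the NETMF Convert.cs listing.
        char[] rfc4648 =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/".ToCharArray();
        char[] netmfDefault =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!*".ToCharArray();

        // Print only the indices where the alphabets disagree.
        for (int i = 0; i < 64; i++)
            if (rfc4648[i] != netmfDefault[i])
                Console.WriteLine("index {0}: '{1}' vs '{2}'", i, rfc4648[i], netmfDefault[i]);
        // prints:
        // index 62: '+' vs '!'
        // index 63: '/' vs '*'
    }
}
```

This is why the bug is intermittent: a signature whose Base64 encoding never produces the 6-bit values 62 or 63 encodes identically under both alphabets, and the token works by pure luck.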


The code shows that the alphabet assigned by default (s_rgchBase64Encoding) is the non-standard one!


How can we fix the problem?


Fortunately, the Convert class exposes a static property, UseRFC4648Encoding, which must be set to true to get the standard RFC 4648 Base64 encoding.
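In practice the fix is a one-liner; here is a minimal sketch, where hmac stands for the signature byte array computed earlier:

```csharp
// On the .Net Micro Framework, switch to the standard alphabet
// before Base64-encoding the signature.
Convert.UseRFC4648Encoding = true;
string signature = Convert.ToBase64String(hmac);

// Alternative workaround (my suggestion, not part of the framework):
// since only the last two alphabet characters differ, the
// non-standard output can also be normalized by hand.
string normalized = Convert.ToBase64String(hmac).Replace('!', '+').Replace('*', '/');
```

The property-based fix is preferable, since the Replace workaround assumes the encoding never legitimately needs '!' or '*' in its output, which holds only while the default alphabet is in use.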


My opinion is that it would be appropriate to reverse the logic so that the default encoding is the standard, and for this reason I have already opened an issue on the official website of the .Net Micro Framework on CodePlex.


What do you think about that?