### Following the Guide: An Odd Problem with Linking MKL

A few days ago, I asked a question on Stack Overflow about linking MKL.

Surprisingly, no one else seems to have encountered a similar problem, so I am writing up the scenario in this blog post.

Here's the link.

### How Did I Discover the Problem?

In blitz, my deep learning framework, I have to mix OpenMP, BLAS, and CUDA. However, my icc is so new that the current nvcc 7.5 does not support it, so I use g++, MKL, and nvcc instead.

I linked against the *single dynamic library*, which collapses the many linking options into a single command: `-fopenmp -lmkl_rt`.

But then my simple MNIST model could not produce correct results! After a long stretch of debugging, I found that multi-threaded sgemm was not behaving as expected. It genuinely shocked me; I had never imagined that a famous vendor product could be wrong.

### Following the Guide

By reading many documents, I found that, with the single dynamic library, MKL does not use GNU threading internally by default:

https://software.intel.com/en-us/forums/intel-math-kernel-library/to…
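For reference, here are two common ways to make MKL use GNU threading consistently with `g++ -fopenmp`. These follow Intel's documentation rather than my exact build script, and the file name `model.cpp` is just a placeholder:

```shell
# Option 1: keep the single dynamic library, but force the GNU threading
# layer at runtime (the env var; mkl_set_threading_layer() works too).
export MKL_THREADING_LAYER=GNU
g++ -fopenmp model.cpp -lmkl_rt -o model

# Option 2: skip mkl_rt and link the GNU threading layer explicitly,
# so the choice is fixed at link time instead of defaulting to
# Intel threading at runtime.
g++ -fopenmp model.cpp \
    -Wl,--no-as-needed -lmkl_intel_lp64 -lmkl_gnu_thread -lmkl_core \
    -lgomp -lpthread -lm -ldl -o model
```

The root cause was that `mkl_rt` defaults to Intel's OpenMP runtime, so `-fopenmp` (libgomp) and MKL ended up running two OpenMP runtimes in the same process, which is exactly the kind of silent corruption I was seeing.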
