Matlab and Simulink get enhanced AI functions

The new release R2018b of MathWorks' popular Matlab and Simulink math software tools contains significant enhancements for deep learning as well as new features and improvements across all product families. With the new Deep Learning Toolbox, the software manufacturer also provides a framework for designing and implementing neural networks. Developers in the fields of image processing, computer vision and signal processing can use the new features to design complex neural network architectures and improve their deep learning models.
By Christoph Hammerschmidt


MathWorks recently joined the ONNX community to promote interoperability and enable collaboration between Matlab users and other deep learning frameworks. With the new ONNX conversion feature in R2018b, developers can import and export models from supported frameworks such as PyTorch, MxNet, and TensorFlow. Thanks to this interoperability, models trained in Matlab can also be used in other frameworks.
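A minimal sketch of this round trip, assuming the ONNX converter support package is installed and a model file named `model.onnx` exists (both file names here are placeholders):

```matlab
% Import an ONNX model trained in another framework (e.g. PyTorch or
% TensorFlow); requires the "Deep Learning Toolbox Converter for ONNX
% Model Format" support package.
net = importONNXNetwork('model.onnx', 'OutputLayerType', 'classification');

% ... inspect, validate or retrain the network in Matlab ...

% Export a Matlab network back to ONNX for use in other frameworks.
exportONNXNetwork(net, 'exported_model.onnx');
```

The `'OutputLayerType'` argument tells the importer how to terminate the network when the ONNX file does not specify a loss layer.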

Likewise, models trained in other frameworks can be integrated into Matlab, where tasks such as debugging, validation, and deployment on embedded platforms can be performed. In addition, R2018b provides carefully selected reference models that can be accessed using a single line of code. Additional import functions allow the use of models from Caffe and Keras-TensorFlow.
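The single-line access to reference models and the additional import paths might look as follows; each pretrained model needs its own freely downloadable support package, and the file names below are placeholders:

```matlab
% One line of code loads a curated pretrained reference model
% (requires e.g. the "Deep Learning Toolbox Model for AlexNet Network"
% support package).
net = alexnet;

% Import a Keras-TensorFlow model saved in HDF5 format.
kerasNet = importKerasNetwork('model.h5');

% Import a Caffe model from its definition and weights files.
caffeNet = importCaffeNetwork('deploy.prototxt', 'weights.caffemodel');
```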

With R2018b, MathWorks also aims to increase user productivity and improve usability for deep learning workflows:

  • The Deep Network Designer app lets users create complex network architectures or modify pre-trained networks for transfer learning.
  • Network training can scale beyond desktop-PC capabilities through cloud support, with the Matlab Deep Learning Container on NVIDIA GPU Cloud and Matlab reference architectures for Amazon Web Services and Microsoft Azure.
  • Enhanced support for specialized workflows, such as ground-truth labeling apps for audio and video data and application-specific datastores, simplifies and accelerates work with large amounts of data.
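A typical starting point for the transfer-learning workflow described above, assuming the GoogLeNet support package is installed:

```matlab
% Load a pretrained network into the workspace (requires the
% "Deep Learning Toolbox Model for GoogLeNet Network" support package).
net = googlenet;

% Launch the Deep Network Designer app; the pretrained network can then
% be imported from the workspace and its final layers replaced for a
% new classification task.
deepNetworkDesigner
```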

With the update to R2018b, the GPU Coder further enhances inference performance by supporting NVIDIA libraries and adding optimizations such as autotuning, layer fusion, and buffer minimization. Also added is support for deployment on Intel and ARM platforms with Intel MKL-DNN and the ARM Compute Library.
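A hedged sketch of generating optimized CUDA inference code with GPU Coder; `predictFcn` is a hypothetical entry-point function that loads a trained network and calls `predict` on its input:

```matlab
% Configure code generation for a static library targeting NVIDIA GPUs
% (requires GPU Coder).
cfg = coder.gpuConfig('lib');

% Use the cuDNN deep learning library; 'mkldnn' (Intel MKL-DNN) or
% 'arm-compute' (ARM Compute Library) would target CPU platforms instead.
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');

% Generate code for a 224x224 RGB single-precision input image.
codegen -config cfg predictFcn -args {ones(224,224,3,'single')}
```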

R2018b is now available and includes updates in the areas of code generation, signal processing and communication, as well as verification and validation. The Deep Learning Toolbox succeeds the Neural Network Toolbox, which will be phased out of the product range.

Further information: R2018b Highlights (video)
