Honghu International is the Taiwan distributor for JACKET, and is also a distributor of NVIDIA GPUs in Taiwan.
For training on this product or any other inquiries, please call Mr. Liao at 0917-782-811 or email terence@honghutech.com.
LibJacket is designed for engineers, scientists, and analysts who want maximum performance and maximum leverage of GPU resources, without hassling with messy low-level programming details. It runs on NVIDIA GPUs in any system, including servers, workstations, and laptops.
It supports single- and double-precision floating-point values, complex numbers, and booleans.
It supports manipulating vectors, matrices, and N-dimensional arrays.
It includes routines for arithmetic, linear algebra, statistics, imaging, signal processing, and related algorithms.
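As a rough illustration of what working with these array types looks like, here is a minimal sketch that creates a GPU-resident vector and computes two simple statistics on it. It is built only from the constructs that appear in the Pi example further down this page (jacket.h, the jkt namespace, f32, f32::rand, sum_vector and the element-wise operators); treat it as a sketch of the style rather than as API documentation.

#include <stdio.h>
#include <jacket.h>          // LibJacket header, as in the Pi example below
using namespace jkt;

int main() {
    int n = 1000000;                     // one million single-precision samples
    f32 x = f32::rand(n, 1);             // GPU-resident column vector of uniform randoms

    // Element-wise arithmetic runs on the GPU; sum_vector reduces to a host scalar.
    float mean = sum_vector(x) / n;      // sample mean, should be near 0.5
    float mom2 = sum_vector(x * x) / n;  // second moment
    float var  = mom2 - mean * mean;     // variance of uniform(0,1) is about 1/12

    printf("mean = %g, variance = %g\n", mean, var);
    return 0;
}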
· It outperforms CPU libraries.
· It is optimized for any CUDA-enabled GPU. The same code will run on everything: laptops, desktops, servers.
· It includes thousands of lines of optimized device code.
· It defers computation so it can analyze and batch instructions to increase arithmetic intensity and memory throughput while avoiding unnecessary temporary allocations (see the sketch after this list).
· It combines and enhances all the best CUDA libraries available, including the fastest FFT, BLAS, and LAPACK implementations.
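To make the deferred-computation point above concrete, the hypothetical fragment below writes a dot product as a single chained expression. Under the batching behaviour described in that bullet, the element-wise multiply and the reduction can be analysed together before anything executes, instead of first materialising a full-size temporary for a * b; whether the runtime fuses this particular expression is an assumption, and the snippet again reuses only the constructs from the Pi example below.

#include <stdio.h>
#include <jacket.h>
using namespace jkt;

int main() {
    int n = 5000000;
    f32 a = f32::rand(n, 1), b = f32::rand(n, 1);

    // The multiply and the reduction are handed to the runtime as one
    // expression, which is the window in which it can batch the work
    // rather than allocating a separate device array for a * b.
    float dot = sum_vector(a * b);

    printf("a . b = %g (expected roughly n/4 = %g)\n", dot, n / 4.0);
    return 0;
}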
The base LibJacket license enables you to access all LibJacket functions and the core runtime system for use on a single GPU.
LibJacket MGL enables Jacketized code to run on multiple GPUs simultaneously in a workstation, supporting from 2 to 8 total GPUs.
LibJacket HPC enables Jacketized code to run on multiple GPUs simultaneously in a cluster.
LibJacket DLA adds many advanced double-precision linear algebra functions.
LibJacket SLA adds sparse matrix support and linear algebra functions.
LibJacket PGI Compatibility adds support for PGI compilers. This is included with the LibJacket Base License.
The library can be used by itself in C/C++ applications or integrated with your existing CUDA code.
Here's a stripped-down example of Monte Carlo estimation of Pi:
#include <stdio.h>
#include <jacket.h>

using namespace jkt;

int main() {
    int n = 20e6;  // 20 million random samples
    f32 x = f32::rand(n,1), y = f32::rand(n,1);

    // how many fell inside unit circle?
    float pi = 4.0 * sum_vector(sqrt(x*x + y*y) < 1) / n;

    printf("pi = %g\n", pi);
    return 0;
}
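For anyone new to the method, the arithmetic behind the example is a single observation: a point drawn uniformly from the unit square lands inside the quarter of the unit circle with probability equal to the ratio of the two areas, which is pi/4. Counting the hits and multiplying by 4 therefore estimates pi:

    \pi \approx \frac{4}{n} \sum_{i=1}^{n} \mathbf{1}\left[ x_i^2 + y_i^2 < 1 \right]

and that is exactly what the expression 4.0 * sum_vector(sqrt(x*x + y*y) < 1) / n computes; taking the sqrt does not change which points pass the test, since sqrt(r) < 1 exactly when r < 1 for non-negative r.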