Concurrency
- Julia HPC and cluster computing.
- Distributed computing and grid computing.
- Cloud computing.
- Parallel computing.
- Hardware architectures (ARM, CUDA, GPU, MIPS) and compute kernels.
General Concurrency Packages¶
- JuliaActors/Actors.jl : Concurrent computing in Julia based on the Actor Model.
- JuliaArrays/TiledIteration.jl : Julia package to facilitate writing multithreaded, multidimensional, cache-efficient code.
- JuliaFolds/Folds.jl : A unified interface for sequential, threaded, and distributed folds. The docs list what functions it supports.
- JuliaParallel/MessageUtils.jl : A collection of utilities for messaging.
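A minimal sketch of the Folds.jl idea (assuming the package is installed): its folds mirror their Base counterparts, so swapping `sum`/`map` for `Folds.sum`/`Folds.map` parallelizes the reduction with the default threaded executor.

```julia
using Folds

xs = 1:1_000_000

# Same answer as Base, computed with a threaded fold by default.
@assert Folds.sum(xs) == sum(xs)

# Folds.map parallelizes an element-wise transformation.
@assert Folds.map(x -> x^2, 1:5) == [1, 4, 9, 16, 25]
```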
Bindings¶
- JuliaGPU/ArrayFire.jl : Julia Wrapper for the ArrayFire library.
- JuliaLinearAlgebra/AppleAccelerate.jl : Julia interface to macOS's Accelerate framework.
- JuliaParallel/Elly.jl : Hadoop HDFS and Yarn client.
- JuliaParallel/Hwloc.jl : Wrapper to the hwloc library to provide a portable abstraction (across OS, versions, architectures, ...) of the hierarchical topology of modern architectures, including NUMA memory nodes, sockets, shared caches, cores and simultaneous multithreading.
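A hedged sketch of querying the topology that Hwloc.jl discovers (the exact counts are, of course, machine-dependent):

```julia
using Hwloc

# Physical cores, excluding simultaneous-multithreading siblings.
println("physical cores: ", Hwloc.num_physical_cores())

# CPU packages (sockets) and NUMA memory nodes in the machine.
println("packages:       ", Hwloc.num_packages())
println("NUMA nodes:     ", Hwloc.num_numa_nodes())
```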
Cloud computing¶
- JuliaCloud/AWS.jl : supports the EC2 and S3 APIs, letting you start and stop EC2 instances dynamically.
- JuliaCloud/AWSCore.jl : Amazon Web Services Core Functions and Types.
- JuliaCloud/AWSS3.jl : AWS S3 Simple Storage Service interface for Julia.
- JuliaCloud/GoogleCloud.jl : Google Cloud APIs for Julia.
- JuliaComputing/Kuber.jl : Julia Kubernetes Client.
SIMD Computing¶
- eschnett/SIMD.jl : Explicit SIMD vector operations for Julia.
- JuliaSIMD/LoopVectorization.jl : vectorize your for loops using the `@turbo` macro.
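A minimal sketch of `@turbo` in use (the function name `dot_turbo` is illustrative): annotating the innermost loop lets LoopVectorization emit SIMD-vectorized and unrolled code.

```julia
using LoopVectorization

function dot_turbo(a, b)
    s = zero(eltype(a))
    # @turbo rewrites this loop into explicitly vectorized code.
    @turbo for i in eachindex(a)
        s += a[i] * b[i]
    end
    return s
end

a = rand(1024); b = rand(1024)
@assert isapprox(dot_turbo(a, b), sum(a .* b))
```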
Multi-Threading¶
- carstenbauer/ThreadPinning.jl : Pin Julia threads to CPU processors. Requires the `lscpu` command (available on virtually all Linux systems). JuliaCon 2023 video.
- JuliaFolds/FLoops.jl : the `@floop` macro, a superset of `Threads.@threads`, for fast generic iteration over complex collections.
- tkf/ThreadsX.jl : Multithreaded base functions such as `map()`, `reduce()` and `foreach()`.
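A hedged sketch of the two styles listed above: `@floop` with `@reduce` declares an explicit parallel reduction, while ThreadsX provides drop-in replacements for Base functions.

```julia
using FLoops, ThreadsX

# @reduce marks the reduction variable so iterations can run on
# multiple threads and the partial results be combined.
@floop for x in 1:100
    @reduce(s += x)
end
@assert s == 5050

# ThreadsX mirrors Base: ThreadsX.map, ThreadsX.reduce, ThreadsX.sum, ...
@assert ThreadsX.sum(1:100) == 5050
```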
Multiprocessing and Distributed Computing¶
- Wikipedia: Distributed Computing across multiple compute nodes.
- Wikipedia: Job Scheduler
- Julia at scale topic on discourse.
- magerton/FARMTest.jl : Simple example scripts for running Julia on a SLURM cluster, using kleinhenz/SlurmClusterManager.jl
- ChevronETC/Schedulers.jl : Elastic and fault-tolerant parallel map and parallel map-reduce methods.
- ChrisRackauckas/ParallelDataTransfer.jl : A bunch of helper functions for transferring data between worker processes.
- eschnett/Persist.jl : Running jobs in the background, independent of the Julia shell.
- JuliaParallel/ClusterManagers.jl : Support for different clustering technologies.
- JuliaParallel/Dagger.jl : A framework for out-of-core and parallel computation and hierarchical scheduling of DAG-structured computations, similar to Python's `dask` library.
- JuliaParallel/DistributedArrays.jl : Distributed arrays for Julia.
- JuliaParallel/MPI.jl : Julia interface to the Message Passing Interface (MPI).
- JuliaPluto/Malt.jl : A multiprocessing package for Julia, used by fonsp/Pluto.jl to manage the Julia process that notebook code is executed in, as a replacement for Distributed.
- kleinhenz/SlurmClusterManager.jl : Julia package for running code on Slurm clusters. See magerton/FARMTest.jl for simple example scripts.
- zgornel/DispatcherCache.jl : A task persistency mechanism based on hash-graphs for Dispatcher.jl.
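Most of the packages above build on Julia's stdlib Distributed model; a minimal sketch of that baseline (no third-party packages assumed):

```julia
using Distributed

addprocs(2)                      # spawn two local worker processes
@everywhere square(x) = x^2      # define the function on every worker

result = pmap(square, 1:4)       # distribute the map over the workers
@assert result == [1, 4, 9, 16]

rmprocs(workers())               # tear the workers down again
```

Cluster managers such as ClusterManagers.jl and SlurmClusterManager.jl plug into this same `addprocs` mechanism to launch workers on remote nodes instead of locally.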
GPU computing¶
- Wikipedia: GPGPU
- Sample OpenCL notebooks for GPU Julia, and GPU Transpose.
- Blog post on High-Performance GPU Computing in the Julia Programming Language.
- hjabird/CVortex.jl : Julia wrapper for cvortex GPU accelerated vortex filament and vortex particle methods.
- JuliaGPU/AMDGPU.jl : AMD GPU (ROCm) programming in Julia.
- JuliaGPU/ArrayFire.jl : Julia Wrapper for the ArrayFire library.
NVIDIA CUDA¶
- JuliaFolds/FoldsCUDA.jl : provides `Transducers.jl`-compatible fold (reduce) implemented using `CUDA.jl`. This brings the transducers and reducing-function combinators implemented in Transducers.jl to the GPU. Furthermore, using FLoops.jl, you can write parallel for loops that run on the GPU.
- JuliaGPU/CUDA.jl : CUDA programming in Julia. See also the JuliaCon 2021 video.
- JuliaGPU/NVTX.jl : Julia bindings for NVTX, for instrumenting with the Nvidia Nsight Systems profiler. JuliaCon 2023 video.
- xiaodaigh/CuCountMap.jl : Fast `StatsBase.countmap` for small types on the GPU via `CUDA.jl`.
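A hedged sketch of CUDA.jl's array programming style (requires an NVIDIA GPU; the `CUDA.functional()` guard skips the GPU work when none is available):

```julia
using CUDA

if CUDA.functional()
    a = CUDA.rand(Float32, 1024)   # allocate a vector on the device
    b = a .^ 2 .+ 1f0              # fused broadcast executes as a GPU kernel
    # Copy back to the host to compare against a CPU computation.
    @assert Array(b) ≈ Array(a) .^ 2 .+ 1f0
end
```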