
module openmp

Source: stdlib/openmp.codon


Routine = Function[([i32, cobj], i32)]


Lock @tuple (named tuple; fields cannot be written)

Fields

a1: i32

a2: i32

a3: i32

a4: i32

a5: i32

a6: i32

a7: i32

a8: i32

Magic methods

__new__()


Ident @tuple

Fields

reserved_1: i32

flags: i32

reserved_2: i32

reserved_3: i32

psource: cobj

Magic methods

__new__(flags: int = 0, source: str = ";unknown;unknown;0;0;;")


LRData @tuple

Fields

routine: Routine


Task @tuple

Fields

shareds: cobj

routine: Routine

flags: i32

x: LRData

y: LRData


TaskWithPrivates[T] @tuple

Fields

task: Task

data: T

T: type


TaskReductionInput @tuple

Fields

reduce_shar: cobj

reduce_orig: cobj

reduce_size: int

reduce_init: cobj

reduce_fini: cobj

reduce_comb: cobj

flags: u32

Magic methods

__new__(reduce_shar, reduce_orig, reduce_size: int, reduce_init: cobj, reduce_comb: cobj)


TaskReductionInputArray @tuple

Fields

len: int

ptr: Ptr[TaskReductionInput]

Magic methods

__setitem__(self, idx: int, x: TaskReductionInput)


flush()


get_num_threads() @pure (no side effects; returns the same value for the same inputs)


get_thread_num() @pure


get_max_threads() @pure


get_num_procs() @pure


set_num_threads(num_threads: int)
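
A minimal usage sketch for the thread-count routines above, assuming Codon's `@par` loop annotation and that this module is importable as `openmp` (the loop body is illustrative only):

```codon
import openmp as omp

omp.set_num_threads(4)           # request up to 4 threads for later parallel regions
print(omp.get_max_threads())     # upper bound on the team size of the next region

@par(num_threads=4)
for i in range(8):
    # get_thread_num()/get_num_threads() are only meaningful inside a
    # parallel region; outside one they report the serial context
    tid = omp.get_thread_num()
```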


in_parallel() @pure


set_dynamic(dynamic_threads: bool = True)


get_dynamic() @pure


get_cancellation() @pure


set_schedule(kind: str, chunk_size: int = 0)


get_schedule() @pure
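
This getter/setter pair mirrors `omp_set_schedule`/`omp_get_schedule` from the OpenMP C API. A hedged sketch; the tuple return shape of `get_schedule` is an assumption:

```codon
import openmp as omp

omp.set_schedule("dynamic", chunk_size=16)   # applies to schedule("runtime") loops
kind, chunk = omp.get_schedule()             # assumed to return a (kind, chunk_size) pair
```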


get_thread_limit() @pure


set_max_active_levels(max_levels: int)


get_max_active_levels() @pure


get_level() @pure


get_ancestor_thread_num(level: int) @pure


get_team_size(level: int) @pure


get_active_level() @pure


in_final() @pure


get_proc_bind() @pure


set_default_device(device_num: int)


get_default_device() @pure


get_num_devices() @pure


get_num_teams() @pure


get_team_num() @pure


is_initial_device() @pure


get_wtime() @pure


get_wtick() @pure
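
As in the C API, `get_wtime` returns wall-clock time in seconds and `get_wtick` the timer resolution. A timing sketch, assuming `@par` recognizes the `+=` reduction as Codon's parallel-programming docs describe:

```codon
import openmp as omp

start = omp.get_wtime()
total = 0
@par
for i in range(1_000_000):
    total += i                     # reduced across threads by @par
elapsed = omp.get_wtime() - start
print(f"{elapsed:.6f} s elapsed (tick = {omp.get_wtick()} s)")
```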


single(func)


master(func)


ordered(func)


critical(func)
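
`single`, `master`, `ordered`, and `critical` take a function, suggesting decorator-style helpers that wrap the call in the corresponding OpenMP construct. A hedged sketch; the decorator usage is an assumption based on the signatures above:

```codon
import openmp as omp

@omp.critical
def append_result(results, x):
    results.append(x)      # executed by one thread at a time

results = []
@par
for i in range(100):
    append_result(results, i * i)
```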


for_par(num_threads: int = -1, chunk_size: int = -1, schedule: Literal[str] = "static", ordered: Literal[bool] = False, collapse: Literal[int] = 0, gpu: Literal[bool] = False)
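
`for_par` is the compiler-facing entry point for parallel loops; in user code the same options are normally spelled through the `@par` loop annotation, e.g.:

```codon
@par(schedule="dynamic", chunk_size=8, num_threads=4)
for i in range(100):
    work(i)    # `work` is a hypothetical per-iteration function
```

The `schedule`, `collapse`, `ordered`, and `gpu` parameters are `Literal` types, so they must be compile-time constants, which lets the compiler pick the loop transformation statically.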