LLVM OpenMP 19.0.0git
Atomic Operations

These functions are used for implementing the many different varieties of atomic operations.

int __kmp_atomic_mode = 1
 

Detailed Description

These functions are used for implementing the many different varieties of atomic operations.

The compiler is at liberty to inline atomic operations that are naturally supported by the target architecture. For instance, on the IA-32 architecture an atomic update like this can be inlined

static int s = 0;
#pragma omp atomic
s++;

using the single instruction: lock; incl s

However the runtime does provide entrypoints for these operations to support compilers that choose not to inline them. (For instance, __kmpc_atomic_fixed4_add could be used to perform the increment above.)

The names of the functions are encoded by using the data type name and the operation name, as in these tables.

Data Type                         Data type encoding
int8_t                            fixed1
uint8_t                           fixed1u
int16_t                           fixed2
uint16_t                          fixed2u
int32_t                           fixed4
uint32_t                          fixed4u
int64_t                           fixed8
uint64_t                          fixed8u
float                             float4
double                            float8
long double (8087 80-bit float)   float10
complex<float>                    cmplx4
complex<double>                   cmplx8
complex<long double>              cmplx10


Operation   Operation encoding
+           add
-           sub
*           mul
/           div
&           andb
<<          shl
>>          shr
|           orb
^           xor
&&          andl
||          orl
maximum     max
minimum     min
.eqv.       eqv
.neqv.      neqv


For non-commutative operations, _rev can also be added for the reversed operation. For the functions that capture the result, the suffix _cpt is added.

Update Functions

The general form of an atomic function that just performs an update (without a capture) is

void __kmpc_atomic_<datatype>_<operation>( ident_t *id_ref, int gtid, TYPE *lhs, TYPE rhs );
Parameters
    ident_t  a pointer to the source location
    gtid     the global thread id
    lhs      a pointer to the left operand
    rhs      the right operand

Capture Functions

The capture functions perform an atomic update and return a result, which is either the value before the capture, or that after. They take an additional argument to determine which result is returned. Their general form is therefore

TYPE __kmpc_atomic_<datatype>_<operation>_cpt( ident_t *id_ref, int gtid, TYPE *lhs, TYPE rhs, int flag );
Parameters
    ident_t  a pointer to the source location
    gtid     the global thread id
    lhs      a pointer to the left operand
    rhs      the right operand
    flag     one if the result is to be captured after the operation, zero if captured before

The one exception is the complex<float> type, where the result is not returned by value; instead, an extra pointer argument is passed through which the captured value is written.

They look like

void __kmpc_atomic_cmplx4_<op>_cpt( ident_t *id_ref, int gtid, kmp_cmplx32 *lhs, kmp_cmplx32 rhs, kmp_cmplx32 *out, int flag );
Here kmp_cmplx32 is the runtime's name for float _Complex (defined in kmp_atomic.h).

Read and Write Operations

The OpenMP standard supports atomic operations that simply ensure that a value is read or written atomically, with no modification performed. On IA-32 architecture these operations can often be inlined, since the architecture guarantees that no tearing occurs on aligned objects accessed with a single memory operation of up to 64 bits in size.

The general form of the read operations is

TYPE __kmpc_atomic_<type>_rd ( ident_t *id_ref, int gtid, TYPE * loc );

For the write operations the form is

void __kmpc_atomic_<type>_wr ( ident_t *id_ref, int gtid, TYPE *lhs, TYPE rhs );

Full list of functions

This leads to the generation of 376 atomic functions, as follows.

Functions for integers

There are versions here for integers of sizes 1, 2, 4 and 8 bytes, both signed and unsigned (where that matters).

__kmpc_atomic_fixed1_add_fp
__kmpc_atomic_fixed1_div_cpt_rev
__kmpc_atomic_fixed1_div_fp
__kmpc_atomic_fixed1_div_rev
__kmpc_atomic_fixed1_mul_fp
__kmpc_atomic_fixed1_shl_cpt_rev
__kmpc_atomic_fixed1_shl_rev
__kmpc_atomic_fixed1_shr_cpt_rev
__kmpc_atomic_fixed1_shr_rev
__kmpc_atomic_fixed1_sub_cpt_rev
__kmpc_atomic_fixed1_sub_fp
__kmpc_atomic_fixed1_sub_rev
__kmpc_atomic_fixed1_swp
__kmpc_atomic_fixed1u_add_fp
__kmpc_atomic_fixed1u_sub_fp
__kmpc_atomic_fixed1u_mul_fp
__kmpc_atomic_fixed1u_div_cpt_rev
__kmpc_atomic_fixed1u_div_fp
__kmpc_atomic_fixed1u_div_rev
__kmpc_atomic_fixed1u_shr_cpt_rev
__kmpc_atomic_fixed1u_shr_rev
__kmpc_atomic_fixed2_add_fp
__kmpc_atomic_fixed2_div_cpt_rev
__kmpc_atomic_fixed2_div_fp
__kmpc_atomic_fixed2_div_rev
__kmpc_atomic_fixed2_mul_fp
__kmpc_atomic_fixed2_shl_cpt_rev
__kmpc_atomic_fixed2_shl_rev
__kmpc_atomic_fixed2_shr_cpt_rev
__kmpc_atomic_fixed2_shr_rev
__kmpc_atomic_fixed2_sub_cpt_rev
__kmpc_atomic_fixed2_sub_fp
__kmpc_atomic_fixed2_sub_rev
__kmpc_atomic_fixed2_swp
__kmpc_atomic_fixed2u_add_fp
__kmpc_atomic_fixed2u_sub_fp
__kmpc_atomic_fixed2u_mul_fp
__kmpc_atomic_fixed2u_div_cpt_rev
__kmpc_atomic_fixed2u_div_fp
__kmpc_atomic_fixed2u_div_rev
__kmpc_atomic_fixed2u_shr_cpt_rev
__kmpc_atomic_fixed2u_shr_rev
__kmpc_atomic_fixed4_add_fp
__kmpc_atomic_fixed4_div_cpt_rev
__kmpc_atomic_fixed4_div_fp
__kmpc_atomic_fixed4_div_rev
__kmpc_atomic_fixed4_mul_fp
__kmpc_atomic_fixed4_shl_cpt_rev
__kmpc_atomic_fixed4_shl_rev
__kmpc_atomic_fixed4_shr_cpt_rev
__kmpc_atomic_fixed4_shr_rev
__kmpc_atomic_fixed4_sub_cpt_rev
__kmpc_atomic_fixed4_sub_fp
__kmpc_atomic_fixed4_sub_rev
__kmpc_atomic_fixed4_swp
__kmpc_atomic_fixed4u_add_fp
__kmpc_atomic_fixed4u_sub_fp
__kmpc_atomic_fixed4u_mul_fp
__kmpc_atomic_fixed4u_div_cpt_rev
__kmpc_atomic_fixed4u_div_fp
__kmpc_atomic_fixed4u_div_rev
__kmpc_atomic_fixed4u_shr_cpt_rev
__kmpc_atomic_fixed4u_shr_rev
__kmpc_atomic_fixed8_add_fp
__kmpc_atomic_fixed8_div_cpt_rev
__kmpc_atomic_fixed8_div_fp
__kmpc_atomic_fixed8_div_rev
__kmpc_atomic_fixed8_mul_fp
__kmpc_atomic_fixed8_shl_cpt_rev
__kmpc_atomic_fixed8_shl_rev
__kmpc_atomic_fixed8_shr_cpt_rev
__kmpc_atomic_fixed8_shr_rev
__kmpc_atomic_fixed8_sub_cpt_rev
__kmpc_atomic_fixed8_sub_fp
__kmpc_atomic_fixed8_sub_rev
__kmpc_atomic_fixed8_swp
__kmpc_atomic_fixed8u_add_fp
__kmpc_atomic_fixed8u_sub_fp
__kmpc_atomic_fixed8u_mul_fp
__kmpc_atomic_fixed8u_div_cpt_rev
__kmpc_atomic_fixed8u_div_fp
__kmpc_atomic_fixed8u_div_rev
__kmpc_atomic_fixed8u_shr_cpt_rev
__kmpc_atomic_fixed8u_shr_rev
void __kmpc_atomic_fixed8_neqv(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs)
void __kmpc_atomic_fixed1_mul_float8(ident_t *id_ref, int gtid, char *lhs, kmp_real64 rhs)
kmp_int32 __kmpc_atomic_fixed4_orl_cpt(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs, int flag)
void __kmpc_atomic_fixed1_orb(ident_t *id_ref, int gtid, char *lhs, char rhs)
kmp_int64 __kmpc_atomic_fixed8_sub_cpt(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs, int flag)
short __kmpc_atomic_fixed2_shl_cpt(ident_t *id_ref, int gtid, short *lhs, short rhs, int flag)
void __kmpc_atomic_fixed4_eqv(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs)
void __kmpc_atomic_fixed8_andb(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs)
void __kmpc_atomic_fixed4_andl(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs)
kmp_int32 __kmpc_atomic_fixed4_add_cpt(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs, int flag)
void __kmpc_atomic_fixed8_min(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs)
char __kmpc_atomic_fixed1_xor_cpt(ident_t *id_ref, int gtid, char *lhs, char rhs, int flag)
short __kmpc_atomic_fixed2_mul_cpt(ident_t *id_ref, int gtid, short *lhs, short rhs, int flag)
void __kmpc_atomic_fixed4_orl(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs)
kmp_int32 __kmpc_atomic_fixed4_max_cpt(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs, int flag)
kmp_uint32 __kmpc_atomic_fixed4u_shr_cpt(ident_t *id_ref, int gtid, kmp_uint32 *lhs, kmp_uint32 rhs, int flag)
kmp_int64 __kmpc_atomic_fixed8_div_cpt(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs, int flag)
void __kmpc_atomic_fixed1u_shr(ident_t *id_ref, int gtid, unsigned char *lhs, unsigned char rhs)
char __kmpc_atomic_fixed1_add_cpt(ident_t *id_ref, int gtid, char *lhs, char rhs, int flag)
void __kmpc_atomic_fixed4u_div(ident_t *id_ref, int gtid, kmp_uint32 *lhs, kmp_uint32 rhs)
void __kmpc_atomic_fixed4_andb(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs)
char __kmpc_atomic_fixed1_mul_cpt(ident_t *id_ref, int gtid, char *lhs, char rhs, int flag)
kmp_int32 __kmpc_atomic_fixed4_shr_cpt(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs, int flag)
void __kmpc_atomic_fixed2_shr(ident_t *id_ref, int gtid, short *lhs, short rhs)
char __kmpc_atomic_fixed1_neqv_cpt(ident_t *id_ref, int gtid, char *lhs, char rhs, int flag)
void __kmpc_atomic_fixed4_max(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs)
char __kmpc_atomic_fixed1_andb_cpt(ident_t *id_ref, int gtid, char *lhs, char rhs, int flag)
void __kmpc_atomic_fixed2_shl(ident_t *id_ref, int gtid, short *lhs, short rhs)
kmp_int64 __kmpc_atomic_fixed8_eqv_cpt(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs, int flag)
void __kmpc_atomic_fixed2_div_float8(ident_t *id_ref, int gtid, short *lhs, kmp_real64 rhs)
kmp_int64 __kmpc_atomic_fixed8_neqv_cpt(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs, int flag)
void __kmpc_atomic_fixed1_neqv(ident_t *id_ref, int gtid, char *lhs, char rhs)
char __kmpc_atomic_fixed1_rd(ident_t *id_ref, int gtid, char *loc)
void __kmpc_atomic_fixed4_mul(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs)
void __kmpc_atomic_fixed4u_shr(ident_t *id_ref, int gtid, kmp_uint32 *lhs, kmp_uint32 rhs)
kmp_int64 __kmpc_atomic_fixed8_xor_cpt(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs, int flag)
void __kmpc_atomic_fixed2_wr(ident_t *id_ref, int gtid, short *lhs, short rhs)
kmp_int64 __kmpc_atomic_fixed8_orl_cpt(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs, int flag)
void __kmpc_atomic_fixed1_shr(ident_t *id_ref, int gtid, char *lhs, char rhs)
short __kmpc_atomic_fixed2_andl_cpt(ident_t *id_ref, int gtid, short *lhs, short rhs, int flag)
void __kmpc_atomic_fixed8u_shr(ident_t *id_ref, int gtid, kmp_uint64 *lhs, kmp_uint64 rhs)
void __kmpc_atomic_fixed8_andl(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs)
void __kmpc_atomic_fixed8_mul_float8(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_real64 rhs)
void __kmpc_atomic_fixed4_orb(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs)
unsigned short __kmpc_atomic_fixed2u_shr_cpt(ident_t *id_ref, int gtid, unsigned short *lhs, unsigned short rhs, int flag)
void __kmpc_atomic_fixed8_xor(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs)
short __kmpc_atomic_fixed2_max_cpt(ident_t *id_ref, int gtid, short *lhs, short rhs, int flag)
void __kmpc_atomic_fixed8_eqv(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs)
char __kmpc_atomic_fixed1_shl_cpt(ident_t *id_ref, int gtid, char *lhs, char rhs, int flag)
char __kmpc_atomic_fixed1_orb_cpt(ident_t *id_ref, int gtid, char *lhs, char rhs, int flag)
void __kmpc_atomic_fixed4_neqv(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs)
kmp_int32 __kmpc_atomic_fixed4_xor_cpt(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs, int flag)
kmp_int64 __kmpc_atomic_fixed8_max_cpt(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs, int flag)
void __kmpc_atomic_fixed2_min(ident_t *id_ref, int gtid, short *lhs, short rhs)
kmp_int64 __kmpc_atomic_fixed8_rd(ident_t *id_ref, int gtid, kmp_int64 *loc)
void __kmpc_atomic_fixed1_mul(ident_t *id_ref, int gtid, char *lhs, char rhs)
unsigned char __kmpc_atomic_fixed1u_div_cpt(ident_t *id_ref, int gtid, unsigned char *lhs, unsigned char rhs, int flag)
void __kmpc_atomic_fixed4_shr(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs)
kmp_int64 __kmpc_atomic_fixed8_shl_cpt(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs, int flag)
kmp_int64 __kmpc_atomic_fixed8_andb_cpt(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs, int flag)
char __kmpc_atomic_fixed1_max_cpt(ident_t *id_ref, int gtid, char *lhs, char rhs, int flag)
void __kmpc_atomic_fixed2_andl(ident_t *id_ref, int gtid, short *lhs, short rhs)
void __kmpc_atomic_fixed2_orl(ident_t *id_ref, int gtid, short *lhs, short rhs)
short __kmpc_atomic_fixed2_andb_cpt(ident_t *id_ref, int gtid, short *lhs, short rhs, int flag)
kmp_int64 __kmpc_atomic_fixed8_andl_cpt(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs, int flag)
short __kmpc_atomic_fixed2_orl_cpt(ident_t *id_ref, int gtid, short *lhs, short rhs, int flag)
char __kmpc_atomic_fixed1_shr_cpt(ident_t *id_ref, int gtid, char *lhs, char rhs, int flag)
short __kmpc_atomic_fixed2_neqv_cpt(ident_t *id_ref, int gtid, short *lhs, short rhs, int flag)
void __kmpc_atomic_fixed8_orb(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs)
kmp_int32 __kmpc_atomic_fixed4_andb_cpt(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs, int flag)
char __kmpc_atomic_fixed1_eqv_cpt(ident_t *id_ref, int gtid, char *lhs, char rhs, int flag)
char __kmpc_atomic_fixed1_div_cpt(ident_t *id_ref, int gtid, char *lhs, char rhs, int flag)
void __kmpc_atomic_fixed8_max(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs)
kmp_int64 __kmpc_atomic_fixed8_shr_cpt(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs, int flag)
char __kmpc_atomic_fixed1_sub_cpt(ident_t *id_ref, int gtid, char *lhs, char rhs, int flag)
unsigned char __kmpc_atomic_fixed1u_shr_cpt(ident_t *id_ref, int gtid, unsigned char *lhs, unsigned char rhs, int flag)
void __kmpc_atomic_fixed2_add(ident_t *id_ref, int gtid, short *lhs, short rhs)
void __kmpc_atomic_fixed4_sub(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs)
void __kmpc_atomic_fixed1_andb(ident_t *id_ref, int gtid, char *lhs, char rhs)
void __kmpc_atomic_fixed4_div_float8(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_real64 rhs)
void __kmpc_atomic_fixed8_shl(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs)
void __kmpc_atomic_fixed4_min(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs)
void __kmpc_atomic_fixed1_add(ident_t *id_ref, int gtid, char *lhs, char rhs)
void __kmpc_atomic_fixed8u_div(ident_t *id_ref, int gtid, kmp_uint64 *lhs, kmp_uint64 rhs)
char __kmpc_atomic_fixed1_andl_cpt(ident_t *id_ref, int gtid, char *lhs, char rhs, int flag)
void __kmpc_atomic_fixed4_wr(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs)
short __kmpc_atomic_fixed2_xor_cpt(ident_t *id_ref, int gtid, short *lhs, short rhs, int flag)
void __kmpc_atomic_fixed2_orb(ident_t *id_ref, int gtid, short *lhs, short rhs)
kmp_int32 __kmpc_atomic_fixed4_div_cpt(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs, int flag)
unsigned short __kmpc_atomic_fixed2u_div_cpt(ident_t *id_ref, int gtid, unsigned short *lhs, unsigned short rhs, int flag)
void __kmpc_atomic_fixed4_div(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs)
void __kmpc_atomic_fixed1_wr(ident_t *id_ref, int gtid, char *lhs, char rhs)
kmp_uint64 __kmpc_atomic_fixed8u_shr_cpt(ident_t *id_ref, int gtid, kmp_uint64 *lhs, kmp_uint64 rhs, int flag)
kmp_uint32 __kmpc_atomic_fixed4u_div_cpt(ident_t *id_ref, int gtid, kmp_uint32 *lhs, kmp_uint32 rhs, int flag)
void __kmpc_atomic_fixed1_shl(ident_t *id_ref, int gtid, char *lhs, char rhs)
void __kmpc_atomic_fixed8_add(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs)
char __kmpc_atomic_fixed1_orl_cpt(ident_t *id_ref, int gtid, char *lhs, char rhs, int flag)
void __kmpc_atomic_fixed8_div(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs)
char __kmpc_atomic_fixed1_min_cpt(ident_t *id_ref, int gtid, char *lhs, char rhs, int flag)
void __kmpc_atomic_fixed1_max(ident_t *id_ref, int gtid, char *lhs, char rhs)
void __kmpc_atomic_fixed1_xor(ident_t *id_ref, int gtid, char *lhs, char rhs)
void __kmpc_atomic_fixed1_div_float8(ident_t *id_ref, int gtid, char *lhs, kmp_real64 rhs)
void __kmpc_atomic_fixed8_sub(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs)
kmp_int32 __kmpc_atomic_fixed4_shl_cpt(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs, int flag)
void __kmpc_atomic_fixed1_eqv(ident_t *id_ref, int gtid, char *lhs, char rhs)
kmp_int32 __kmpc_atomic_fixed4_eqv_cpt(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs, int flag)
kmp_uint64 __kmpc_atomic_fixed8u_div_cpt(ident_t *id_ref, int gtid, kmp_uint64 *lhs, kmp_uint64 rhs, int flag)
kmp_int32 __kmpc_atomic_fixed4_neqv_cpt(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs, int flag)
void __kmpc_atomic_fixed4_xor(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs)
void __kmpc_atomic_fixed4_mul_float8(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_real64 rhs)
void __kmpc_atomic_fixed1_min(ident_t *id_ref, int gtid, char *lhs, char rhs)
void __kmpc_atomic_fixed1_sub(ident_t *id_ref, int gtid, char *lhs, char rhs)
short __kmpc_atomic_fixed2_orb_cpt(ident_t *id_ref, int gtid, short *lhs, short rhs, int flag)
short __kmpc_atomic_fixed2_div_cpt(ident_t *id_ref, int gtid, short *lhs, short rhs, int flag)
void __kmpc_atomic_fixed2_sub(ident_t *id_ref, int gtid, short *lhs, short rhs)
void __kmpc_atomic_fixed2_eqv(ident_t *id_ref, int gtid, short *lhs, short rhs)
short __kmpc_atomic_fixed2_rd(ident_t *id_ref, int gtid, short *loc)
kmp_int32 __kmpc_atomic_fixed4_rd(ident_t *id_ref, int gtid, kmp_int32 *loc)
kmp_int32 __kmpc_atomic_fixed4_andl_cpt(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs, int flag)
void __kmpc_atomic_fixed2_mul(ident_t *id_ref, int gtid, short *lhs, short rhs)
kmp_int64 __kmpc_atomic_fixed8_min_cpt(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs, int flag)
kmp_int32 __kmpc_atomic_fixed4_orb_cpt(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs, int flag)
void __kmpc_atomic_fixed8_orl(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs)
void __kmpc_atomic_fixed1_orl(ident_t *id_ref, int gtid, char *lhs, char rhs)
void __kmpc_atomic_fixed2_max(ident_t *id_ref, int gtid, short *lhs, short rhs)
short __kmpc_atomic_fixed2_shr_cpt(ident_t *id_ref, int gtid, short *lhs, short rhs, int flag)
kmp_int64 __kmpc_atomic_fixed8_mul_cpt(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs, int flag)
void __kmpc_atomic_fixed2u_shr(ident_t *id_ref, int gtid, unsigned short *lhs, unsigned short rhs)
short __kmpc_atomic_fixed2_min_cpt(ident_t *id_ref, int gtid, short *lhs, short rhs, int flag)
void __kmpc_atomic_fixed2_neqv(ident_t *id_ref, int gtid, short *lhs, short rhs)
kmp_int32 __kmpc_atomic_fixed4_sub_cpt(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs, int flag)
short __kmpc_atomic_fixed2_sub_cpt(ident_t *id_ref, int gtid, short *lhs, short rhs, int flag)
void __kmpc_atomic_fixed1_div(ident_t *id_ref, int gtid, char *lhs, char rhs)
void __kmpc_atomic_fixed2_div(ident_t *id_ref, int gtid, short *lhs, short rhs)
void __kmpc_atomic_fixed8_wr(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs)
void __kmpc_atomic_fixed1u_div(ident_t *id_ref, int gtid, unsigned char *lhs, unsigned char rhs)
void __kmpc_atomic_fixed2_xor(ident_t *id_ref, int gtid, short *lhs, short rhs)
void __kmpc_atomic_fixed4_shl(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs)
void __kmpc_atomic_fixed8_shr(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs)
void __kmpc_atomic_fixed2_andb(ident_t *id_ref, int gtid, short *lhs, short rhs)
short __kmpc_atomic_fixed2_eqv_cpt(ident_t *id_ref, int gtid, short *lhs, short rhs, int flag)
void __kmpc_atomic_fixed2u_div(ident_t *id_ref, int gtid, unsigned short *lhs, unsigned short rhs)
kmp_int32 __kmpc_atomic_fixed4_min_cpt(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs, int flag)
void __kmpc_atomic_fixed4_add(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs)
void __kmpc_atomic_fixed8_mul(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs)
kmp_int32 __kmpc_atomic_fixed4_mul_cpt(ident_t *id_ref, int gtid, kmp_int32 *lhs, kmp_int32 rhs, int flag)
void __kmpc_atomic_fixed1_andl(ident_t *id_ref, int gtid, char *lhs, char rhs)
void __kmpc_atomic_fixed2_mul_float8(ident_t *id_ref, int gtid, short *lhs, kmp_real64 rhs)
kmp_int64 __kmpc_atomic_fixed8_orb_cpt(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs, int flag)
kmp_int64 __kmpc_atomic_fixed8_add_cpt(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_int64 rhs, int flag)
void __kmpc_atomic_fixed8_div_float8(ident_t *id_ref, int gtid, kmp_int64 *lhs, kmp_real64 rhs)
short __kmpc_atomic_fixed2_add_cpt(ident_t *id_ref, int gtid, short *lhs, short rhs, int flag)

Functions for floating point

There are versions here for floating point numbers of sizes 4, 8, 10 and 16 bytes. (Ten-byte floats are used by the x87 FPU, but are now rare.)

__kmpc_atomic_float4_add_fp
__kmpc_atomic_float4_div_cpt_rev
__kmpc_atomic_float4_div_fp
__kmpc_atomic_float4_div_rev
__kmpc_atomic_float4_mul_fp
__kmpc_atomic_float4_sub_cpt_rev
__kmpc_atomic_float4_sub_fp
__kmpc_atomic_float4_sub_rev
__kmpc_atomic_float4_swp
__kmpc_atomic_float8_add_fp
__kmpc_atomic_float8_div_cpt_rev
__kmpc_atomic_float8_div_fp
__kmpc_atomic_float8_div_rev
__kmpc_atomic_float8_mul_fp
__kmpc_atomic_float8_sub_cpt_rev
__kmpc_atomic_float8_sub_fp
__kmpc_atomic_float8_sub_rev
__kmpc_atomic_float8_swp
__kmpc_atomic_float10_add_fp
__kmpc_atomic_float10_div_cpt_rev
__kmpc_atomic_float10_div_fp
__kmpc_atomic_float10_div_rev
__kmpc_atomic_float10_mul_fp
__kmpc_atomic_float10_sub_cpt_rev
__kmpc_atomic_float10_sub_fp
__kmpc_atomic_float10_sub_rev
__kmpc_atomic_float10_swp
__kmpc_atomic_float16_add
__kmpc_atomic_float16_add_cpt
__kmpc_atomic_float16_div
__kmpc_atomic_float16_div_cpt
__kmpc_atomic_float16_div_cpt_rev
__kmpc_atomic_float16_div_rev
__kmpc_atomic_float16_max
__kmpc_atomic_float16_max_cpt
__kmpc_atomic_float16_min
__kmpc_atomic_float16_min_cpt
__kmpc_atomic_float16_mul
__kmpc_atomic_float16_mul_cpt
__kmpc_atomic_float16_rd
__kmpc_atomic_float16_sub
__kmpc_atomic_float16_sub_cpt
__kmpc_atomic_float16_sub_cpt_rev
__kmpc_atomic_float16_sub_rev
__kmpc_atomic_float16_swp
__kmpc_atomic_float16_wr
kmp_real64 __kmpc_atomic_float8_sub_cpt(ident_t *id_ref, int gtid, kmp_real64 *lhs, kmp_real64 rhs, int flag)
kmp_real32 __kmpc_atomic_float4_add_cpt(ident_t *id_ref, int gtid, kmp_real32 *lhs, kmp_real32 rhs, int flag)
long double __kmpc_atomic_float10_div_cpt(ident_t *id_ref, int gtid, long double *lhs, long double rhs, int flag)
kmp_real64 __kmpc_atomic_float8_div_cpt(ident_t *id_ref, int gtid, kmp_real64 *lhs, kmp_real64 rhs, int flag)
void __kmpc_atomic_float10_mul(ident_t *id_ref, int gtid, long double *lhs, long double rhs)
void __kmpc_atomic_float4_sub(ident_t *id_ref, int gtid, kmp_real32 *lhs, kmp_real32 rhs)
void __kmpc_atomic_float8_max(ident_t *id_ref, int gtid, kmp_real64 *lhs, kmp_real64 rhs)
void __kmpc_atomic_float8_div(ident_t *id_ref, int gtid, kmp_real64 *lhs, kmp_real64 rhs)
void __kmpc_atomic_float10_sub(ident_t *id_ref, int gtid, long double *lhs, long double rhs)
kmp_real32 __kmpc_atomic_float4_sub_cpt(ident_t *id_ref, int gtid, kmp_real32 *lhs, kmp_real32 rhs, int flag)
void __kmpc_atomic_float4_min(ident_t *id_ref, int gtid, kmp_real32 *lhs, kmp_real32 rhs)
kmp_real64 __kmpc_atomic_float8_max_cpt(ident_t *id_ref, int gtid, kmp_real64 *lhs, kmp_real64 rhs, int flag)
void __kmpc_atomic_float8_min(ident_t *id_ref, int gtid, kmp_real64 *lhs, kmp_real64 rhs)
void __kmpc_atomic_float4_add(ident_t *id_ref, int gtid, kmp_real32 *lhs, kmp_real32 rhs)
long double __kmpc_atomic_float10_rd(ident_t *id_ref, int gtid, long double *loc)
void __kmpc_atomic_float8_sub(ident_t *id_ref, int gtid, kmp_real64 *lhs, kmp_real64 rhs)
void __kmpc_atomic_float4_add_float8(ident_t *id_ref, int gtid, kmp_real32 *lhs, kmp_real64 rhs)
void __kmpc_atomic_float8_wr(ident_t *id_ref, int gtid, kmp_real64 *lhs, kmp_real64 rhs)
kmp_real32 __kmpc_atomic_float4_rd(ident_t *id_ref, int gtid, kmp_real32 *loc)
void __kmpc_atomic_float8_mul(ident_t *id_ref, int gtid, kmp_real64 *lhs, kmp_real64 rhs)
kmp_real32 __kmpc_atomic_float4_max_cpt(ident_t *id_ref, int gtid, kmp_real32 *lhs, kmp_real32 rhs, int flag)
long double __kmpc_atomic_float10_sub_cpt(ident_t *id_ref, int gtid, long double *lhs, long double rhs, int flag)
long double __kmpc_atomic_float10_add_cpt(ident_t *id_ref, int gtid, long double *lhs, long double rhs, int flag)
void __kmpc_atomic_float4_div_float8(ident_t *id_ref, int gtid, kmp_real32 *lhs, kmp_real64 rhs)
void __kmpc_atomic_float4_sub_float8(ident_t *id_ref, int gtid, kmp_real32 *lhs, kmp_real64 rhs)
void __kmpc_atomic_float4_wr(ident_t *id_ref, int gtid, kmp_real32 *lhs, kmp_real32 rhs)
kmp_real32 __kmpc_atomic_float4_min_cpt(ident_t *id_ref, int gtid, kmp_real32 *lhs, kmp_real32 rhs, int flag)
void __kmpc_atomic_float10_add(ident_t *id_ref, int gtid, long double *lhs, long double rhs)
void __kmpc_atomic_float4_mul_float8(ident_t *id_ref, int gtid, kmp_real32 *lhs, kmp_real64 rhs)
kmp_real64 __kmpc_atomic_float8_min_cpt(ident_t *id_ref, int gtid, kmp_real64 *lhs, kmp_real64 rhs, int flag)
kmp_real64 __kmpc_atomic_float8_rd(ident_t *id_ref, int gtid, kmp_real64 *loc)
kmp_real32 __kmpc_atomic_float4_div_cpt(ident_t *id_ref, int gtid, kmp_real32 *lhs, kmp_real32 rhs, int flag)
kmp_real64 __kmpc_atomic_float8_mul_cpt(ident_t *id_ref, int gtid, kmp_real64 *lhs, kmp_real64 rhs, int flag)
long double __kmpc_atomic_float10_mul_cpt(ident_t *id_ref, int gtid, long double *lhs, long double rhs, int flag)
void __kmpc_atomic_float10_wr(ident_t *id_ref, int gtid, long double *lhs, long double rhs)
void __kmpc_atomic_float4_max(ident_t *id_ref, int gtid, kmp_real32 *lhs, kmp_real32 rhs)
void __kmpc_atomic_float8_add(ident_t *id_ref, int gtid, kmp_real64 *lhs, kmp_real64 rhs)
void __kmpc_atomic_float10_div(ident_t *id_ref, int gtid, long double *lhs, long double rhs)
void __kmpc_atomic_float4_div(ident_t *id_ref, int gtid, kmp_real32 *lhs, kmp_real32 rhs)
kmp_real64 __kmpc_atomic_float8_add_cpt(ident_t *id_ref, int gtid, kmp_real64 *lhs, kmp_real64 rhs, int flag)
void __kmpc_atomic_float4_mul(ident_t *id_ref, int gtid, kmp_real32 *lhs, kmp_real32 rhs)
kmp_real32 __kmpc_atomic_float4_mul_cpt(ident_t *id_ref, int gtid, kmp_real32 *lhs, kmp_real32 rhs, int flag)

Functions for Complex types

Functions for complex types whose component floating point variables are of size 4, 8, 10 or 16 bytes. The names here are based on the size of the component float, not the size of the complex type. So __kmpc_atomic_cmplx8_add is an operation on a complex<double> or complex(kind=8), not a complex<float>.

__kmpc_atomic_cmplx4_div_cpt_rev
__kmpc_atomic_cmplx4_div_rev
__kmpc_atomic_cmplx4_sub_cpt_rev
__kmpc_atomic_cmplx4_sub_rev
__kmpc_atomic_cmplx4_swp
__kmpc_atomic_cmplx8_div_cpt_rev
__kmpc_atomic_cmplx8_div_rev
__kmpc_atomic_cmplx8_sub_cpt_rev
__kmpc_atomic_cmplx8_sub_rev
__kmpc_atomic_cmplx8_swp
__kmpc_atomic_cmplx10_div_cpt_rev
__kmpc_atomic_cmplx10_div_rev
__kmpc_atomic_cmplx10_sub_cpt_rev
__kmpc_atomic_cmplx10_sub_rev
__kmpc_atomic_cmplx10_swp
__kmpc_atomic_cmplx16_add
__kmpc_atomic_cmplx16_add_cpt
__kmpc_atomic_cmplx16_div
__kmpc_atomic_cmplx16_div_cpt
__kmpc_atomic_cmplx16_div_cpt_rev
__kmpc_atomic_cmplx16_div_rev
__kmpc_atomic_cmplx16_mul
__kmpc_atomic_cmplx16_mul_cpt
__kmpc_atomic_cmplx16_rd
__kmpc_atomic_cmplx16_sub
__kmpc_atomic_cmplx16_sub_cpt
__kmpc_atomic_cmplx16_sub_cpt_rev
__kmpc_atomic_cmplx16_swp
__kmpc_atomic_cmplx16_wr
kmp_cmplx80 __kmpc_atomic_cmplx10_sub_cpt(ident_t *id_ref, int gtid, kmp_cmplx80 *lhs, kmp_cmplx80 rhs, int flag)
void __kmpc_atomic_cmplx8_mul(ident_t *id_ref, int gtid, kmp_cmplx64 *lhs, kmp_cmplx64 rhs)
kmp_cmplx80 __kmpc_atomic_cmplx10_mul_cpt(ident_t *id_ref, int gtid, kmp_cmplx80 *lhs, kmp_cmplx80 rhs, int flag)
kmp_cmplx64 __kmpc_atomic_cmplx8_rd(ident_t *id_ref, int gtid, kmp_cmplx64 *loc)
void __kmpc_atomic_cmplx4_div_cpt(ident_t *id_ref, int gtid, kmp_cmplx32 *lhs, kmp_cmplx32 rhs, kmp_cmplx32 *out, int flag)
kmp_cmplx64 __kmpc_atomic_cmplx8_add_cpt(ident_t *id_ref, int gtid, kmp_cmplx64 *lhs, kmp_cmplx64 rhs, int flag)
void __kmpc_atomic_cmplx4_sub_cmplx8(ident_t *id_ref, int gtid, kmp_cmplx32 *lhs, kmp_cmplx64 rhs)
kmp_cmplx80 __kmpc_atomic_cmplx10_div_cpt(ident_t *id_ref, int gtid, kmp_cmplx80 *lhs, kmp_cmplx80 rhs, int flag)
void __kmpc_atomic_cmplx4_add_cmplx8(ident_t *id_ref, int gtid, kmp_cmplx32 *lhs, kmp_cmplx64 rhs)
kmp_cmplx64 __kmpc_atomic_cmplx8_sub_cpt(ident_t *id_ref, int gtid, kmp_cmplx64 *lhs, kmp_cmplx64 rhs, int flag)
void __kmpc_atomic_cmplx4_div_cmplx8(ident_t *id_ref, int gtid, kmp_cmplx32 *lhs, kmp_cmplx64 rhs)
void __kmpc_atomic_cmplx4_add_cpt(ident_t *id_ref, int gtid, kmp_cmplx32 *lhs, kmp_cmplx32 rhs, kmp_cmplx32 *out, int flag)
void __kmpc_atomic_cmplx4_div(ident_t *id_ref, int gtid, kmp_cmplx32 *lhs, kmp_cmplx32 rhs)
void __kmpc_atomic_cmplx10_sub(ident_t *id_ref, int gtid, kmp_cmplx80 *lhs, kmp_cmplx80 rhs)
void __kmpc_atomic_cmplx4_mul_cmplx8(ident_t *id_ref, int gtid, kmp_cmplx32 *lhs, kmp_cmplx64 rhs)
void __kmpc_atomic_cmplx10_add(ident_t *id_ref, int gtid, kmp_cmplx80 *lhs, kmp_cmplx80 rhs)
void __kmpc_atomic_cmplx10_div(ident_t *id_ref, int gtid, kmp_cmplx80 *lhs, kmp_cmplx80 rhs)
void __kmpc_atomic_cmplx8_wr(ident_t *id_ref, int gtid, kmp_cmplx64 *lhs, kmp_cmplx64 rhs)
void __kmpc_atomic_cmplx4_wr(ident_t *id_ref, int gtid, kmp_cmplx32 *lhs, kmp_cmplx32 rhs)
void __kmpc_atomic_cmplx4_mul(ident_t *id_ref, int gtid, kmp_cmplx32 *lhs, kmp_cmplx32 rhs)
void __kmpc_atomic_cmplx8_div(ident_t *id_ref, int gtid, kmp_cmplx64 *lhs, kmp_cmplx64 rhs)
void __kmpc_atomic_cmplx8_add(ident_t *id_ref, int gtid, kmp_cmplx64 *lhs, kmp_cmplx64 rhs)
kmp_cmplx64 __kmpc_atomic_cmplx8_div_cpt(ident_t *id_ref, int gtid, kmp_cmplx64 *lhs, kmp_cmplx64 rhs, int flag)
void __kmpc_atomic_cmplx8_sub(ident_t *id_ref, int gtid, kmp_cmplx64 *lhs, kmp_cmplx64 rhs)
kmp_cmplx80 __kmpc_atomic_cmplx10_add_cpt(ident_t *id_ref, int gtid, kmp_cmplx80 *lhs, kmp_cmplx80 rhs, int flag)
void __kmpc_atomic_cmplx4_sub_cpt(ident_t *id_ref, int gtid, kmp_cmplx32 *lhs, kmp_cmplx32 rhs, kmp_cmplx32 *out, int flag)
kmp_cmplx64 __kmpc_atomic_cmplx8_mul_cpt(ident_t *id_ref, int gtid, kmp_cmplx64 *lhs, kmp_cmplx64 rhs, int flag)
void __kmpc_atomic_cmplx10_wr(ident_t *id_ref, int gtid, kmp_cmplx80 *lhs, kmp_cmplx80 rhs)
kmp_cmplx32 __kmpc_atomic_cmplx4_rd(ident_t *id_ref, int gtid, kmp_cmplx32 *loc)
kmp_cmplx80 __kmpc_atomic_cmplx10_rd(ident_t *id_ref, int gtid, kmp_cmplx80 *loc)
void __kmpc_atomic_cmplx4_sub(ident_t *id_ref, int gtid, kmp_cmplx32 *lhs, kmp_cmplx32 rhs)
void __kmpc_atomic_cmplx4_mul_cpt(ident_t *id_ref, int gtid, kmp_cmplx32 *lhs, kmp_cmplx32 rhs, kmp_cmplx32 *out, int flag)
void __kmpc_atomic_cmplx4_add(ident_t *id_ref, int gtid, kmp_cmplx32 *lhs, kmp_cmplx32 rhs)
void __kmpc_atomic_cmplx10_mul(ident_t *id_ref, int gtid, kmp_cmplx80 *lhs, kmp_cmplx80 rhs)

Variable Documentation

__kmp_atomic_mode

int __kmp_atomic_mode = 1