ICL_EVENTS(3CPC) CPU Performance Counters Library Functions ICL_EVENTS(3CPC)

NAME


icl_events - processor model specific performance counter events

DESCRIPTION


This manual page describes events specific to the following Intel CPU
models and is derived from Intel's perfmon data. For more information,
please consult the Intel Software Developer's Manual or Intel's perfmon
website.

CPU models described by this document:

+o Family 0x6, Model 0x7e

The following events are supported:

ld_blocks.store_forward
Counts the number of times where store forwarding was prevented for
a load operation. The most common case is a load blocked due to the
address of memory access (partially) overlapping with a preceding
uncompleted store. Note: see the table of unsupported store
forwards in the Optimization Guide.

ld_blocks.no_sr
Counts the number of times that split load operations are
temporarily blocked because all resources for handling the split
accesses are in use.

ld_blocks_partial.address_alias
Counts the number of times a load was blocked due to false
dependencies in the MOB (Memory Order Buffer) caused by a partial
compare on the address.

dtlb_load_misses.walk_completed_4k
Counts completed page walks (4K sizes) caused by demand data
loads. This implies address translations missed in the DTLB and
further levels of TLB. The page walk can end with or without a
fault.

dtlb_load_misses.walk_completed_2m_4m
Counts completed page walks (2M/4M sizes) caused by demand data
loads. This implies address translations missed in the DTLB and
further levels of TLB. The page walk can end with or without a
fault.

dtlb_load_misses.walk_completed
Counts completed page walks (all page sizes) caused by demand data
loads. This implies it missed in the DTLB and further levels of
TLB. The page walk can end with or without a fault.

dtlb_load_misses.walk_pending
Counts the number of page walks outstanding for a demand load in
the PMH (Page Miss Handler) each cycle.
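
Together with dtlb_load_misses.walk_completed, the walk_pending
count yields a useful derived metric: the average duration, in
cycles, of a demand-load page walk. A minimal sketch of that
arithmetic, using made-up counter values:

```python
def avg_walk_cycles(walk_pending, walk_completed):
    """Average demand-load page-walk duration in cycles.

    walk_pending   -- dtlb_load_misses.walk_pending (pending-walk
                      cycles summed over all outstanding walks)
    walk_completed -- dtlb_load_misses.walk_completed
    """
    if walk_completed == 0:
        return 0.0
    return walk_pending / walk_completed

# hypothetical sampled counts
print(avg_walk_cycles(250_000, 10_000))  # 25.0 cycles per walk
```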

dtlb_load_misses.walk_active
Counts cycles when at least one PMH (Page Miss Handler) is busy
with a page walk for a demand load.

dtlb_load_misses.stlb_hit
Counts loads that miss the DTLB (Data TLB) and hit the STLB (Second
level TLB).

int_misc.recovery_cycles
Counts core cycles when the Resource allocator was stalled due to
recovery from an earlier branch misprediction or machine clear
event.

int_misc.all_recovery_cycles
Counts cycles the Backend cluster is recovering after a mis-
speculation or a Store Buffer or Load Buffer drain stall.

int_misc.uop_dropping
Estimated number of Top-down Microarchitecture Analysis slots that
were dropped due to non-front-end reasons.

int_misc.clear_resteer_cycles
Cycles after recovery from a branch misprediction or machine clear
till the first uop is issued from the resteered path.

uops_issued.any
Counts the number of uops that the Resource Allocation Table (RAT)
issues to the Reservation Station (RS).

uops_issued.stall_cycles
Counts cycles during which the Resource Allocation Table (RAT) does
not issue any Uops to the reservation station (RS) for the current
thread.

arith.divider_active
Counts cycles when divide unit is busy executing divide or square
root operations. Accounts for integer and floating-point
operations.

l2_rqsts.demand_data_rd_miss
Counts the number of demand Data Read requests that miss L2 cache.
Only non-rejected loads are counted.

l2_rqsts.rfo_miss
Counts the RFO (Read-for-Ownership) requests that miss L2 cache.

l2_rqsts.code_rd_miss
Counts L2 cache misses when fetching instructions.

l2_rqsts.all_demand_miss
Counts demand requests that miss L2 cache.

l2_rqsts.swpf_miss
Counts Software prefetch requests that miss the L2 cache. This
event accounts for PREFETCHNTA and PREFETCHT0/1/2 instructions.

l2_rqsts.demand_data_rd_hit
Counts the number of demand Data Read requests initiated by load
instructions that hit L2 cache.

l2_rqsts.rfo_hit
Counts the RFO (Read-for-Ownership) requests that hit L2 cache.

l2_rqsts.code_rd_hit
Counts L2 cache hits when fetching instructions, code reads.

l2_rqsts.swpf_hit
Counts Software prefetch requests that hit the L2 cache. This event
accounts for PREFETCHNTA and PREFETCHT0/1/2 instructions.

l2_rqsts.all_demand_data_rd
Counts the number of demand Data Read requests (including requests
from L1D hardware prefetchers). These loads may hit or miss L2
cache. Only non-rejected loads are counted.

l2_rqsts.all_rfo
Counts the total number of RFO (read for ownership) requests to L2
cache. L2 RFO requests include both L1D demand RFO misses as well
as L1D RFO prefetches.

l2_rqsts.all_code_rd
Counts the total number of L2 code requests.

l2_rqsts.all_demand_references
Counts demand requests to L2 cache.
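
The all_demand_miss and all_demand_references umasks combine into
an L2 demand miss ratio. A minimal sketch, with made-up counter
values:

```python
def l2_demand_miss_ratio(all_demand_miss, all_demand_references):
    """Fraction of demand requests to the L2 that missed
    (l2_rqsts.all_demand_miss / l2_rqsts.all_demand_references)."""
    if all_demand_references == 0:
        return 0.0
    return all_demand_miss / all_demand_references

# hypothetical sampled counts
print(l2_demand_miss_ratio(12_500, 100_000))  # 0.125
```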

core_power.lvl0_turbo_license
Counts Core cycles where the core was running with power-delivery
for baseline license level 0. This includes non-AVX code, SSE,
AVX 128-bit, and low-current AVX 256-bit code.

core_power.lvl1_turbo_license
Counts Core cycles where the core was running with power-delivery
for license level 1. This includes high current AVX 256-bit
instructions as well as low current AVX 512-bit instructions.

core_power.lvl2_turbo_license
Core cycles where the core was running with power-delivery for
license level 2 (introduced in the Skylake Server
microarchitecture).
This includes high current AVX 512-bit instructions.

longest_lat_cache.miss
Counts core-originated cacheable requests that miss the L3 cache
(Longest Latency cache). Requests include data and code reads,
Reads-for-Ownership (RFOs), speculative accesses and hardware
prefetches from L1 and L2. It does not include all misses to the
L3.

sw_prefetch_access.nta
Counts the number of PREFETCHNTA instructions executed.

sw_prefetch_access.t0
Counts the number of PREFETCHT0 instructions executed.

sw_prefetch_access.t1_t2
Counts the number of PREFETCHT1 or PREFETCHT2 instructions
executed.

sw_prefetch_access.prefetchw
Counts the number of PREFETCHW instructions executed.

cpu_clk_unhalted.thread_p
This is an architectural event that counts the number of thread
cycles while the thread is not in a halt state. The thread enters
the halt state when it is running the HLT instruction. The core
frequency may change from time to time due to power or thermal
throttling. For this reason, this event may have a changing ratio
with regards to wall clock time.

cpu_clk_unhalted.ref_xclk
Counts core crystal clock cycles when the thread is unhalted.

cpu_clk_unhalted.one_thread_active
Counts Core crystal clock cycles when current thread is unhalted
and the other thread is halted.

cpu_clk_unhalted.ref_distributed
This event distributes Core crystal clock cycle counts between
active hyperthreads, i.e., those in the C0 power state. A hyperthread
becomes inactive when it executes the HLT or MWAIT instructions. If
one thread is active in a core, all counts are attributed to this
hyperthread. To obtain the full count when the Core is active, sum
the counts from each hyperthread.
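
The summation described above can be sketched as follows; the
per-thread counts are made up:

```python
def core_ref_cycles(per_thread_counts):
    """Full-core crystal clock count: ref_distributed splits the
    core's cycles among active hyperthreads, so the core total is
    the sum of the per-hyperthread counts."""
    return sum(per_thread_counts)

# hypothetical counts from the two hyperthreads of one core
print(core_ref_cycles([40_000_000, 35_000_000]))  # 75000000
```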

l1d_pend_miss.pending
Counts the number of L1D misses that are outstanding in each
cycle, that is, each cycle, the number of Fill Buffers (FBs)
outstanding that are required by demand reads. An FB is either held
by a demand load, or held by a non-demand load that is hit at least
once by a demand access. The valid outstanding interval runs until
the FB is deallocated: from FB allocation, if the FB is allocated
by demand; from the demand hit on the FB, if it is allocated by a
hardware or software prefetch. Note: in the L1D, a demand read
covers cacheable and noncacheable demand loads, including ones
causing cache-line splits and reads due to page walks resulting
from any request type.

l1d_pend_miss.pending_cycles
Counts duration of L1D miss outstanding in cycles.

l1d_pend_miss.fb_full
Counts number of cycles a demand request has waited due to L1D Fill
Buffer (FB) unavailability. Demand requests include
cacheable/uncacheable demand load, store, lock or SW prefetch
accesses.

l1d_pend_miss.fb_full_periods
Counts number of phases a demand request has waited due to L1D Fill
Buffer (FB) unavailability. Demand requests include
cacheable/uncacheable demand load, store, lock or SW prefetch
accesses.

l1d_pend_miss.l2_stall
Counts number of cycles a demand request has waited in the L1D due
to a lack of L2 resources. Demand requests include
cacheable/uncacheable demand load, store, lock or SW prefetch
accesses.

dtlb_store_misses.walk_completed_4k
Counts completed page walks (4K sizes) caused by demand data
stores. This implies address translations missed in the DTLB and
further levels of TLB. The page walk can end with or without a
fault.

dtlb_store_misses.walk_completed_2m_4m
Counts completed page walks (2M/4M sizes) caused by demand data
stores. This implies address translations missed in the DTLB and
further levels of TLB. The page walk can end with or without a
fault.

dtlb_store_misses.walk_completed
Counts completed page walks (all page sizes) caused by demand data
stores. This implies it missed in the DTLB and further levels of
TLB. The page walk can end with or without a fault.

dtlb_store_misses.walk_pending
Counts the number of page walks outstanding for a store in the PMH
(Page Miss Handler) each cycle.

dtlb_store_misses.walk_active
Counts cycles when at least one PMH (Page Miss Handler) is busy
with a page walk for a store.

dtlb_store_misses.stlb_hit
Counts stores that miss the DTLB (Data TLB) and hit the STLB (2nd
Level TLB).

load_hit_prefetch.swpf
Counts all non-software-prefetch load dispatches that hit the fill
buffer (FB) allocated for the software prefetch. It can also be
incremented by some lock instructions. So it should only be used
with profiling so that the locks can be excluded by ASM (Assembly
File) inspection of the nearby instructions.

l1d.replacement
Counts L1D data line replacements including opportunistic
replacements, and replacements that require stall-for-replace or
block-for-replace.

tx_mem.abort_conflict
Counts the number of times a TSX line had a cache conflict.

tx_mem.abort_capacity_write
Speculatively counts the number of Transactional Synchronization
Extensions (TSX) aborts due to a data capacity limitation for
transactional writes.

tx_mem.abort_hle_store_to_elided_lock
Counts the number of times a TSX Abort was triggered due to a non-
release/commit store to lock.

tx_mem.abort_hle_elision_buffer_not_empty
Counts the number of times a TSX Abort was triggered due to commit
but Lock Buffer not empty.

tx_mem.abort_hle_elision_buffer_mismatch
Counts the number of times a TSX Abort was triggered due to
release/commit but data and address mismatch.

tx_mem.abort_hle_elision_buffer_unsupported_alignment
Counts the number of times a TSX Abort was triggered due to
attempting an unsupported alignment from Lock Buffer.

tx_mem.hle_elision_buffer_full
Counts the number of times the Lock Buffer could not be allocated.

tx_mem.abort_capacity_read
Speculatively counts the number of Transactional Synchronization
Extensions (TSX) aborts due to a data capacity limitation for
transactional reads.

tx_exec.misc2
Counts Unfriendly TSX abort triggered by a vzeroupper instruction.

tx_exec.misc3
Counts Unfriendly TSX abort triggered by a nest count that is too
deep.

rs_events.empty_cycles
Counts cycles during which the reservation station (RS) is empty
for this logical processor. This is usually caused when the front-
end pipeline runs into starvation periods (e.g., branch
mispredictions or i-cache misses).

rs_events.empty_end
Counts end of periods where the Reservation Station (RS) was empty.
This can be useful for closely sampling front-end latency issues
(see the FRONTEND_RETIRED designated precise events).

offcore_requests_outstanding.demand_data_rd
Counts the number of off-core outstanding Demand Data Read
transactions every cycle. A transaction is considered to be in the
Off-core outstanding state between L2 cache miss and data-return to
the core.

offcore_requests_outstanding.demand_rfo
Counts the number of off-core outstanding read-for-ownership (RFO)
store transactions every cycle. An RFO transaction is considered to
be in the Off-core outstanding state between L2 cache miss and
transaction completion.

offcore_requests_outstanding.cycles_with_demand_rfo
Counts the number of offcore outstanding demand RFO (read-for-
ownership) transactions in the super queue every cycle. The
'Offcore outstanding' state of a transaction lasts from the L2 miss
until the transaction completion is sent to the requestor (SQ
deallocation).
See the corresponding Umask under OFFCORE_REQUESTS.

offcore_requests_outstanding.all_data_rd
Counts the number of offcore outstanding cacheable Core Data Read
transactions in the super queue every cycle. A transaction is
considered to be in the Offcore outstanding state between L2 miss
and transaction completion sent to requestor (SQ de-allocation).
See corresponding Umask under OFFCORE_REQUESTS.

offcore_requests_outstanding.cycles_with_data_rd
Counts cycles when offcore outstanding cacheable Core Data Read
transactions are present in the super queue. A transaction is
considered to be in the Offcore outstanding state between L2 miss
and transaction completion sent to requestor (SQ de-allocation).
See corresponding Umask under OFFCORE_REQUESTS.

offcore_requests_outstanding.cycles_with_l3_miss_demand_data_rd
Counts cycles with at least one Demand Data Read request that
misses the L3 cache in the super queue.

idq.mite_uops
Counts the number of uops delivered to Instruction Decode Queue
(IDQ) from the MITE path. This also means that uops are not being
delivered from the Decode Stream Buffer (DSB).

idq.mite_cycles_ok
Counts the number of cycles where optimal number of uops was
delivered to the Instruction Decode Queue (IDQ) from the MITE
(legacy decode pipeline) path. During these cycles uops are not
being delivered from the Decode Stream Buffer (DSB).

idq.mite_cycles_any
Counts the number of cycles uops were delivered to the Instruction
Decode Queue (IDQ) from the MITE (legacy decode pipeline) path.
During these cycles uops are not being delivered from the Decode
Stream Buffer (DSB).

idq.dsb_uops
Counts the number of uops delivered to Instruction Decode Queue
(IDQ) from the Decode Stream Buffer (DSB) path.

idq.dsb_cycles_ok
Counts the number of cycles where the optimal number of uops was
delivered to the Instruction Decode Queue (IDQ) from the Decode
Stream Buffer (DSB) path. During these cycles uops are not being
delivered from the MITE (legacy decode pipeline).

idq.dsb_cycles_any
Counts the number of cycles uops were delivered to Instruction
Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path.

idq.ms_switches
Number of switches from DSB (Decode Stream Buffer) or MITE (legacy
decode pipeline) to the Microcode Sequencer.

idq.ms_uops
Counts the total number of uops delivered by the Microcode
Sequencer (MS). Any instruction over 4 uops will be delivered by
the MS. Some instructions such as transcendentals may additionally
generate uops from the MS.

idq.ms_cycles_any
Counts cycles during which uops are being delivered to Instruction
Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy.
Uops may be initiated by the Decode Stream Buffer (DSB) or MITE.

icache_16b.ifdata_stall
Counts cycles where a code line fetch is stalled due to an L1
instruction cache miss. The legacy decode pipeline works at a 16
Byte granularity.

icache_64b.iftag_hit
Counts instruction fetch tag lookups that hit in the instruction
cache (L1I). Counts at 64-byte cache-line granularity. Accounts for
both cacheable and uncacheable accesses.

icache_64b.iftag_miss
Counts instruction fetch tag lookups that miss in the instruction
cache (L1I). Counts at 64-byte cache-line granularity. Accounts for
both cacheable and uncacheable accesses.

icache_64b.iftag_stall
Counts cycles where a code fetch is stalled due to L1 instruction
cache tag miss.

itlb_misses.walk_completed_4k
Counts completed page walks (4K page sizes) caused by a code fetch.
This implies it missed in the ITLB (Instruction TLB) and further
levels of TLB. The page walk can end with or without a fault.

itlb_misses.walk_completed_2m_4m
Counts completed page walks (2M/4M page sizes) caused by a code
fetch. This implies it missed in the ITLB (Instruction TLB) and
further levels of TLB. The page walk can end with or without a
fault.

itlb_misses.walk_completed
Counts completed page walks (all page sizes) caused by a code
fetch. This implies it missed in the ITLB (Instruction TLB) and
further levels of TLB. The page walk can end with or without a
fault.

itlb_misses.walk_pending
Counts the number of page walks outstanding for an outstanding code
(instruction fetch) request in the PMH (Page Miss Handler) each
cycle.

itlb_misses.walk_active
Counts cycles when at least one PMH (Page Miss Handler) is busy
with a page walk for a code (instruction fetch) request.

itlb_misses.stlb_hit
Counts instruction fetch requests that miss the ITLB (Instruction
TLB) and hit the STLB (Second-level TLB).

ild_stall.lcp
Counts cycles in which Instruction Length Decoder (ILD) stalls
occurred due to a dynamically changing prefix length of the decoded
instruction (operand size prefix 0x66, address size prefix 0x67, or
REX.W for Intel 64). The count is proportional to the number of
prefixes in a 16-byte line. This may
result in a three-cycle penalty for each LCP (Length changing
prefix) in a 16-byte chunk.

idq_uops_not_delivered.core
Counts the number of uops not delivered by the Instruction Decode
Queue (IDQ) to the back-end of the pipeline when there were no
back-end stalls. This event counts for one SMT thread in a given
cycle.

idq_uops_not_delivered.cycles_0_uops_deliv.core
Counts the number of cycles when no uops were delivered by the
Instruction Decode Queue (IDQ) to the back-end of the pipeline when
there were no back-end stalls. This event counts for one SMT thread
in a given cycle.

idq_uops_not_delivered.cycles_fe_was_ok
Counts the number of cycles when the optimal number of uops were
delivered by the Instruction Decode Queue (IDQ) to the back-end of
the pipeline when there were no back-end stalls. This event counts
for one SMT thread in a given cycle.
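
These umasks feed the Top-down front-end-bound metric. A minimal
sketch, assuming a 5-wide issue pipeline (the width and the counter
values here are illustrative assumptions):

```python
PIPELINE_WIDTH = 5  # assumed issue width; verify for the target CPU

def frontend_bound(uops_not_delivered_core, unhalted_cycles):
    """TMA-style front-end-bound fraction:
    idq_uops_not_delivered.core over total issue slots."""
    slots = PIPELINE_WIDTH * unhalted_cycles
    return uops_not_delivered_core / slots if slots else 0.0

# hypothetical sampled counts
print(frontend_bound(1_000_000, 2_000_000))  # 0.1
```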

uops_dispatched.port_0
Counts, on the per-thread basis, cycles during which at least one
uop is dispatched from the Reservation Station (RS) to port 0.

uops_dispatched.port_1
Counts, on the per-thread basis, cycles during which at least one
uop is dispatched from the Reservation Station (RS) to port 1.

uops_dispatched.port_2_3
Counts, on the per-thread basis, cycles during which at least one
uop is dispatched from the Reservation Station (RS) to ports 2 and
3.

uops_dispatched.port_4_9
Counts, on the per-thread basis, cycles during which at least one
uop is dispatched from the Reservation Station (RS) to ports 4 and
9.

uops_dispatched.port_5
Counts, on the per-thread basis, cycles during which at least one
uop is dispatched from the Reservation Station (RS) to port 5.

uops_dispatched.port_6
Counts, on the per-thread basis, cycles during which at least one
uop is dispatched from the Reservation Station (RS) to port 6.

uops_dispatched.port_7_8
Counts, on the per-thread basis, cycles during which at least one
uop is dispatched from the Reservation Station (RS) to ports 7 and
8.

resource_stalls.scoreboard
Counts cycles where the pipeline is stalled due to serializing
operations.

resource_stalls.sb
Counts allocation stall cycles caused by the store buffer (SB)
being full. This counts cycles that the pipeline back-end blocked
uop delivery from the front-end.

cycle_activity.cycles_l2_miss
Cycles while L2 cache miss demand load is outstanding.

cycle_activity.cycles_l3_miss
Cycles while L3 cache miss demand load is outstanding.

cycle_activity.stalls_total
Total execution stalls.

cycle_activity.stalls_l2_miss
Execution stalls while L2 cache miss demand load is outstanding.

cycle_activity.stalls_l3_miss
Execution stalls while L3 cache miss demand load is outstanding.

cycle_activity.cycles_l1d_miss
Cycles while L1 cache miss demand load is outstanding.

cycle_activity.stalls_l1d_miss
Execution stalls while L1 cache miss demand load is outstanding.

cycle_activity.cycles_mem_any
Cycles while memory subsystem has an outstanding load.

cycle_activity.stalls_mem_any
Execution stalls while memory subsystem has an outstanding load.

topdown.slots_p
Counts the number of available slots for an unhalted logical
processor. The event increments by machine-width of the narrowest
pipeline as employed by the Top-down Microarchitecture Analysis
method. The count is distributed among unhalted logical processors
(hyper-threads) that share the same physical core.

topdown.backend_bound_slots
Counts the number of Top-down Microarchitecture Analysis (TMA)
method's slots where no micro-operations were being issued from
front-end to back-end of the machine due to lack of back-end
resources.
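
Dividing this event by topdown.slots_p gives the level-1
Backend_Bound fraction of the TMA method. A minimal sketch, with
made-up counter values:

```python
def backend_bound(backend_bound_slots, slots):
    """TMA level-1 Backend_Bound:
    topdown.backend_bound_slots / topdown.slots_p."""
    return backend_bound_slots / slots if slots else 0.0

# hypothetical sampled counts
print(backend_bound(3_000_000, 10_000_000))  # 0.3
```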

topdown.br_mispredict_slots
Number of TMA slots that were wasted due to incorrect speculation
by branch mispredictions. This event estimates the number of
operations that were issued but not retired from the speculative
path, as well as the out-of-order engine recovery past a branch
misprediction.

exe_activity.1_ports_util
Counts cycles during which a total of 1 uop was executed on all
ports and Reservation Station (RS) was not empty.

exe_activity.2_ports_util
Counts cycles during which a total of 2 uops were executed on all
ports and Reservation Station (RS) was not empty.

exe_activity.3_ports_util
Counts cycles during which a total of 3 uops were executed on all
ports and the Reservation Station (RS) was not empty.

exe_activity.4_ports_util
Counts cycles during which a total of 4 uops were executed on all
ports and the Reservation Station (RS) was not empty.

exe_activity.bound_on_stores
Counts cycles where the Store Buffer was full and no loads caused
an execution stall.

exe_activity.exe_bound_0_ports
Counts cycles during which no uops were executed on all ports and
Reservation Station (RS) was not empty.

lsd.uops
Counts the number of uops delivered to the back-end by the LSD
(Loop Stream Detector).

lsd.cycles_active
Counts the cycles when at least one uop is delivered by the LSD
(Loop-stream detector).

lsd.cycles_ok
Counts the cycles when optimal number of uops is delivered by the
LSD (Loop-stream detector).

dsb2mite_switches.penalty_cycles
Decode Stream Buffer (DSB) is a Uop-cache that holds translations
of previously fetched instructions that were decoded by the legacy
x86 decode pipeline (MITE). This event counts fetch penalty cycles
when a transition occurs from DSB to MITE.

dsb2mite_switches.count
Counts the number of Decode Stream Buffer (DSB a.k.a. Uop
Cache)-to-MITE speculative transitions.

offcore_requests.demand_data_rd
Counts the Demand Data Read requests sent to uncore. Use it in
conjunction with OFFCORE_REQUESTS_OUTSTANDING to determine average
latency in the uncore.
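
The latency calculation the text refers to can be sketched as
follows, with made-up counter values:

```python
def avg_offcore_latency(outstanding_cycles, requests):
    """Average demand-data-read latency in core cycles:
    offcore_requests_outstanding.demand_data_rd divided by
    offcore_requests.demand_data_rd."""
    return outstanding_cycles / requests if requests else 0.0

# hypothetical sampled counts
print(avg_offcore_latency(4_500_000, 30_000))  # 150.0
```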

offcore_requests.demand_rfo
Counts the demand RFO (read for ownership) requests including
regular RFOs, locks, ItoM.

offcore_requests.all_data_rd
Counts the demand and prefetch data reads. All Core Data Reads
include cacheable 'Demands' and L2 prefetchers (not L3
prefetchers). Counting also covers reads due to page walks
resulting from any request type.

offcore_requests.l3_miss_demand_data_rd
Counts Demand Data Read requests that miss the L3 cache.

offcore_requests.all_requests
Counts memory transactions that reached the super queue, including
requests initiated by the core, all L3 prefetches, page walks,
etc.

uops_executed.thread
Counts the number of uops to be executed per-thread each cycle.

uops_executed.stall_cycles
Counts cycles during which no uops were dispatched from the
Reservation Station (RS) per thread.

uops_executed.cycles_ge_1
Cycles where at least 1 uop was executed per-thread.

uops_executed.cycles_ge_2
Cycles where at least 2 uops were executed per-thread.

uops_executed.cycles_ge_3
Cycles where at least 3 uops were executed per-thread.

uops_executed.cycles_ge_4
Cycles where at least 4 uops were executed per-thread.

uops_executed.core
Counts the number of uops executed from any thread.

uops_executed.core_cycles_ge_1
Counts cycles when at least 1 micro-op is executed from any thread
on physical core.

uops_executed.core_cycles_ge_2
Counts cycles when at least 2 micro-ops are executed from any
thread on physical core.

uops_executed.core_cycles_ge_3
Counts cycles when at least 3 micro-ops are executed from any
thread on physical core.

uops_executed.core_cycles_ge_4
Counts cycles when at least 4 micro-ops are executed from any
thread on physical core.

uops_executed.x87
Counts the number of x87 uops executed.

tlb_flush.dtlb_thread
Counts the number of DTLB flush attempts of the thread-specific
entries.

tlb_flush.stlb_any
Counts the number of any STLB flush attempts (such as entire, VPID,
PCID, InvPage, CR3 write, etc.).

inst_retired.any_p
Counts the number of x86 instructions retired - an Architectural
PerfMon event. Counting continues during hardware interrupts,
traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is
counted by a designated fixed counter freeing up programmable
counters to count other events. INST_RETIRED.ANY_P is counted by a
programmable counter.

inst_retired.stall_cycles
This event counts cycles without actually retired instructions.

assists.fp
Counts all microcode Floating Point assists.

assists.any
Counts the number of occurrences where a microcode assist is
invoked by hardware. Examples include AD (page Access Dirty), FP,
and AVX related assists.

uops_retired.stall_cycles
This event counts cycles without actually retired uops.

uops_retired.total_cycles
Counts the number of cycles using an always-true condition
(uops_ret < 16) applied to the non-PEBS uops retired event.

uops_retired.slots
Counts the retirement slots used each cycle.

machine_clears.count
Counts the number of machine clears (nukes) of any type.

machine_clears.memory_ordering
Counts the number of Machine Clears detected due to memory
ordering. Memory Ordering Machine Clears may apply when a memory
read may not conform to the memory ordering rules of the x86
architecture.

machine_clears.smc
Counts self-modifying code (SMC) detected, which causes a machine
clear.

br_inst_retired.all_branches
Counts all branch instructions retired.

br_inst_retired.cond_taken
Counts taken conditional branch instructions retired.

br_inst_retired.near_call
Counts both direct and indirect near call instructions retired.

br_inst_retired.near_return
Counts return instructions retired.

br_inst_retired.cond_ntaken
Counts not-taken conditional branch instructions retired.

br_inst_retired.cond
Counts conditional branch instructions retired.

br_inst_retired.near_taken
Counts taken branch instructions retired.

br_inst_retired.far_branch
Counts far branch instructions retired.

br_inst_retired.indirect
Counts all indirect branch instructions retired (excluding RETs;
TSX aborts are considered indirect branches).

br_misp_retired.all_branches
Counts all the retired branch instructions that were mispredicted
by the processor. A branch misprediction occurs when the processor
incorrectly predicts the destination of the branch. When the
misprediction is discovered at execution, all the instructions
executed in the wrong (speculative) path must be discarded, and the
processor must start fetching from the correct path.
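
Paired with br_inst_retired.all_branches, this event gives the
overall branch misprediction rate. A minimal sketch, with made-up
counter values:

```python
def branch_mispredict_rate(mispredicted, retired):
    """br_misp_retired.all_branches / br_inst_retired.all_branches."""
    return mispredicted / retired if retired else 0.0

# hypothetical sampled counts
print(branch_mispredict_rate(12_000, 1_000_000))  # 0.012
```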

br_misp_retired.cond_taken
Counts taken conditional mispredicted branch instructions retired.

br_misp_retired.indirect_call
Counts retired mispredicted indirect (near taken) CALL
instructions, including both register and memory indirect.

br_misp_retired.cond_ntaken
Counts the number of conditional branch instructions retired that
were mispredicted and the branch direction was not taken.

br_misp_retired.cond
Counts mispredicted conditional branch instructions retired.

br_misp_retired.near_taken
Counts number of near branch instructions retired that were
mispredicted and taken.

br_misp_retired.indirect
Counts all mispredicted indirect branch instructions retired
(excluding RETs; TSX aborts are considered indirect branches).

fp_arith_inst_retired.scalar_double
Counts number of SSE/AVX computational scalar double precision
floating-point instructions retired; some instructions will count
twice as noted below. Each count represents 1 computational
operation. Applies to SSE* and AVX* scalar double precision
floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT
FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they
perform 2 calculations per element.

fp_arith_inst_retired.scalar_single
Counts number of SSE/AVX computational scalar single precision
floating-point instructions retired; some instructions will count
twice as noted below. Each count represents 1 computational
operation. Applies to SSE* and AVX* scalar single precision
floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP
FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they
perform 2 calculations per element.

fp_arith_inst_retired.128b_packed_double
Counts number of SSE/AVX computational 128-bit packed double
precision floating-point instructions retired; some instructions
will count twice as noted below. Each count represents 2
computation operations, one for each element. Applies to SSE* and
AVX* packed double precision floating-point instructions: ADD SUB
HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and
FM(N)ADD/SUB instructions count twice as they perform 2
calculations per element.

fp_arith_inst_retired.128b_packed_single
Counts number of SSE/AVX computational 128-bit packed single
precision floating-point instructions retired; some instructions
will count twice as noted below. Each count represents 4
computation operations, one for each element. Applies to SSE* and
AVX* packed single precision floating-point instructions: ADD SUB
HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB.
DPP and FM(N)ADD/SUB instructions count twice as they perform 2
calculations per element.

fp_arith_inst_retired.256b_packed_double
Counts number of SSE/AVX computational 256-bit packed double
precision floating-point instructions retired; some instructions
will count twice as noted below. Each count represents 4
computation operations, one for each element. Applies to SSE* and
AVX* packed double precision floating-point instructions: ADD SUB
HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB
instructions count twice as they perform 2 calculations per
element.

fp_arith_inst_retired.256b_packed_single
Counts number of SSE/AVX computational 256-bit packed single
precision floating-point instructions retired; some instructions
will count twice as noted below. Each count represents 8
computation operations, one for each element. Applies to SSE* and
AVX* packed single precision floating-point instructions: ADD SUB
HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB.
DPP and FM(N)ADD/SUB instructions count twice as they perform 2
calculations per element.

fp_arith_inst_retired.512b_packed_double
Counts number of SSE/AVX computational 512-bit packed double
precision floating-point instructions retired; some instructions
will count twice as noted below. Each count represents 8
computation operations, one for each element. Applies to SSE* and
AVX* packed double precision floating-point instructions: ADD SUB
MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB
instructions count twice as they perform 2 calculations per
element.

fp_arith_inst_retired.512b_packed_single
Counts number of SSE/AVX computational 512-bit packed single
precision floating-point instructions retired; some instructions
will count twice as noted below. Each count represents 16
computation operations, one for each element. Applies to SSE* and
AVX* packed single precision floating-point instructions: ADD SUB
MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB
instructions count twice as they perform 2 calculations per
element.
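
Summing the fp_arith_inst_retired umasks with their per-count
operation weights yields total retired floating-point operations
(each event already counts FMA as two operations). A minimal
sketch, with made-up counter values:

```python
def retired_flops(counts):
    """Total FP operations from fp_arith_inst_retired umask counts;
    the weights are the operations represented by one count."""
    weights = {
        "scalar_single": 1, "scalar_double": 1,
        "128b_packed_double": 2, "128b_packed_single": 4,
        "256b_packed_double": 4, "256b_packed_single": 8,
        "512b_packed_double": 8, "512b_packed_single": 16,
    }
    return sum(weights[k] * v for k, v in counts.items())

# hypothetical sampled counts
print(retired_flops({"256b_packed_single": 1_000, "scalar_double": 500}))
# 8500
```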

hle_retired.start
Counts the number of times we entered an HLE region. Does not count
nested transactions.

hle_retired.commit
Counts the number of times HLE commit succeeded.

hle_retired.aborted
Counts the number of times HLE abort was triggered.

hle_retired.aborted_mem
Counts the number of times an HLE execution aborted due to various
memory events (e.g., read/write capacity and conflicts).

hle_retired.aborted_unfriendly
Counts the number of times an HLE execution aborted due to HLE-
unfriendly instructions and certain unfriendly events (such as AD
assists).

hle_retired.aborted_events
Counts the number of times an HLE execution aborted due to
unfriendly events (such as interrupts).

rtm_retired.start
Counts the number of times we entered an RTM region. Does not count
nested transactions.

rtm_retired.commit
Counts the number of times RTM commit succeeded.

rtm_retired.aborted
Counts the number of times RTM abort was triggered.

rtm_retired.aborted_mem
Counts the number of times an RTM execution aborted due to various
memory events (e.g. read/write capacity and conflicts).

rtm_retired.aborted_unfriendly
Counts the number of times an RTM execution aborted due to HLE-
unfriendly instructions.

rtm_retired.aborted_memtype
Counts the number of times an RTM execution aborted due to
incompatible memory type.

rtm_retired.aborted_events
Counts the number of times an RTM execution aborted for reasons
other than the previous four categories (e.g., an interrupt).

misc_retired.lbr_inserts
Increments when an entry is added to the Last Branch Record (LBR)
array (or removed from the array in case of RETURNs in call stack
mode). The event requires LBR enable via IA32_DEBUGCTL MSR and
branch type selection via MSR_LBR_SELECT.

misc_retired.pause_inst
Counts number of retired PAUSE instructions. This event is not
supported on first SKL and KBL products.

mem_inst_retired.stlb_miss_loads
Counts retired load instructions that truly miss the STLB.

mem_inst_retired.stlb_miss_stores
Counts retired store instructions that truly miss the STLB.

mem_inst_retired.lock_loads
Counts retired load instructions with locked access.

mem_inst_retired.split_loads
Counts retired load instructions that split across a cacheline
boundary.

mem_inst_retired.split_stores
Counts retired store instructions that split across a cacheline
boundary.

mem_inst_retired.all_loads
Counts all retired load instructions. This event accounts for SW
prefetch instructions for loads.

mem_inst_retired.all_stores
Counts all retired store instructions. This event accounts for SW
prefetch instructions and the PREFETCHW instruction for stores.

mem_load_retired.l1_hit
Counts retired load instructions with at least one uop that hit in
the L1 data cache. This event includes all SW prefetches and lock
instructions regardless of the data source.

mem_load_retired.l2_hit
Counts retired load instructions with L2 cache hits as data
sources.

mem_load_retired.l3_hit
Counts retired load instructions with at least one uop that hit in
the L3 cache.

mem_load_retired.l1_miss
Counts retired load instructions with at least one uop that missed
in the L1 cache.

mem_load_retired.l2_miss
Counts retired load instructions whose data sources missed the L2
cache.

mem_load_retired.l3_miss
Counts retired load instructions with at least one uop that missed
in the L3 cache.

mem_load_retired.fb_hit
Counts retired load instructions with at least one uop that missed
in the L1 data cache but hit the fill buffer (FB), due to a
preceding miss to the same cache line with the data not yet ready.

mem_load_l3_hit_retired.xsnp_miss
Counts the retired load instructions whose data sources were L3 hit
and cross-core snoop missed in on-pkg core cache.

mem_load_l3_hit_retired.xsnp_hit
Counts retired load instructions whose data sources were L3 hits
and cross-core snoop hits in on-pkg core cache.

mem_load_l3_hit_retired.xsnp_hitm
Counts retired load instructions whose data sources were HitM
responses from shared L3.

mem_load_l3_hit_retired.xsnp_none
Counts retired load instructions whose data sources were hits in L3
without snoops required.

baclears.any
Counts the number of times the front-end is resteered when it finds
a branch instruction in a fetch line. This occurs the first time a
branch instruction is fetched or when the branch is no longer
tracked by the BPU (Branch Prediction Unit).

cpu_clk_unhalted.distributed
This event distributes cycle counts between active hyperthreads,
i.e., those in C0. A hyperthread becomes inactive when it executes
the HLT or MWAIT instructions. If all other hyperthreads are
inactive (or disabled or do not exist), all counts are attributed
to this hyperthread. To obtain the full count when the Core is
active, sum the counts from each hyperthread.

l2_trans.l2_wb
Counts L2 writebacks that access the L2 cache.

l2_lines_in.all
Counts the number of L2 cache lines filling the L2. Counting does
not cover rejects.

l2_lines_out.silent
Counts the number of lines that are silently dropped by L2 cache
when triggered by an L2 cache fill. These lines are typically in
Shared or Exclusive state. A non-threaded event.

l2_lines_out.non_silent
Counts the number of lines that are evicted by L2 cache when
triggered by an L2 cache fill. Those lines are in Modified state.
Modified lines are written back to the L3.

l2_lines_out.useless_hwpf
Counts the number of cache lines that have been prefetched by the
L2 hardware prefetcher but not used by demand access when evicted
from the L2 cache.

sq_misc.sq_full
Counts the cycles for which the thread is active and the super
queue (SQ) cannot take any more entries.

SEE ALSO


cpc(3CPC)

https://download.01.org/perfmon/index/

illumos June 18, 2018 illumos