JKT_EVENTS(3CPC) CPU Performance Counters Library Functions JKT_EVENTS(3CPC)

NAME


jkt_events - processor model specific performance counter events

DESCRIPTION


This manual page describes events specific to the following Intel CPU
models and is derived from Intel's perfmon data. For more information,
please consult the Intel Software Developer's Manual or Intel's perfmon
website.

CPU models described by this document:

o Family 0x6, Model 0x2d

The following events are supported:

br_inst_exec.nontaken_conditional
Not taken macro-conditional branches.

br_inst_exec.taken_conditional
Taken speculative and retired macro-conditional branches.

br_inst_exec.taken_direct_jump
Taken speculative and retired macro-unconditional branch instructions
excluding calls and indirects.

br_inst_exec.taken_indirect_jump_non_call_ret
Taken speculative and retired indirect branches excluding calls and
returns.

br_inst_exec.taken_indirect_near_return
Taken speculative and retired indirect branches with return
mnemonic.

br_inst_exec.taken_direct_near_call
Taken speculative and retired direct near calls.

br_inst_exec.taken_indirect_near_call
Taken speculative and retired indirect calls.

br_inst_exec.all_conditional
Speculative and retired macro-conditional branches.

br_inst_exec.all_direct_jmp
Speculative and retired macro-unconditional branches excluding
calls and indirects.

br_inst_exec.all_indirect_jump_non_call_ret
Speculative and retired indirect branches excluding calls and
returns.

br_inst_exec.all_indirect_near_return
Speculative and retired indirect return branches.

br_inst_exec.all_direct_near_call
Speculative and retired direct near calls.

br_misp_exec.nontaken_conditional
Not taken speculative and retired mispredicted macro conditional
branches.

br_misp_exec.taken_conditional
Taken speculative and retired mispredicted macro conditional
branches.

br_misp_exec.taken_indirect_jump_non_call_ret
Taken speculative and retired mispredicted indirect branches
excluding calls and returns.

br_misp_exec.taken_return_near
Taken speculative and retired mispredicted indirect branches with
return mnemonic.

br_misp_exec.taken_direct_near_call
Taken speculative and retired mispredicted direct near calls.

br_misp_exec.taken_indirect_near_call
Taken speculative and retired mispredicted indirect calls.

br_misp_exec.all_conditional
Speculative and retired mispredicted macro conditional branches.

br_misp_exec.all_indirect_jump_non_call_ret
Mispredicted indirect branches excluding calls and returns.

br_misp_exec.all_direct_near_call
Speculative and retired mispredicted direct near calls.

cpu_clk_unhalted.thread_p
Thread cycles when thread is not in halt state.

itlb.itlb_flush
Flushing of Instruction TLB (ITLB) pages; includes 4K/2M/4M pages.

icache.hit
Number of Instruction Cache, Streaming Buffer and Victim Cache
Reads, both cacheable and noncacheable, including UC fetches.

icache.misses
This event counts the number of instruction cache, streaming buffer
and victim cache misses. Counting includes uncacheable accesses.

lsd.uops
Number of Uops delivered by the LSD.

lsd.cycles_active
Cycles when uops are delivered by the LSD but do not come from the
decoder.

ild_stall.lcp
Stalls caused by changing prefix length of the instruction.

ild_stall.iq_full
Stall cycles because the Instruction Queue (IQ) is full.

insts_written_to_iq.insts
Valid instructions written to IQ per cycle.

idq.empty
Instruction Decode Queue (IDQ) empty cycles.

idq.mite_uops
Uops delivered to Instruction Decode Queue (IDQ) from MITE path.

idq.dsb_uops
Uops delivered to Instruction Decode Queue (IDQ) from the Decode
Stream Buffer (DSB) path.

idq.ms_dsb_uops
Uops initiated by Decode Stream Buffer (DSB) that are being
delivered to Instruction Decode Queue (IDQ) while Microcode
Sequencer (MS) is busy.

idq.ms_mite_uops
Uops initiated by MITE and delivered to Instruction Decode Queue
(IDQ) while Microcode Sequencer (MS) is busy.

idq.ms_uops
Uops delivered to Instruction Decode Queue (IDQ) while Microcode
Sequencer (MS) is busy.

idq.ms_cycles
This event counts cycles during which the microcode sequencer
assisted the front-end in delivering uops. Microcode assists are
used for complex instructions or scenarios that can't be handled by
the standard decoder. Using other instructions, if possible, will
usually improve performance. See the Intel 64 and IA-32
Architectures Optimization Reference Manual for more information.

idq_uops_not_delivered.core
This event counts the number of uops not delivered to the back-end
per cycle, per thread, when the back-end was not stalled. In the
ideal case 4 uops can be delivered each cycle. The event counts
the undelivered uops - so if 3 were delivered in one cycle, the
counter would be incremented by 1 for that cycle (4 - 3). If the
back-end is stalled, the count for this event is not incremented
even when uops were not delivered, because the back-end would not
have been able to accept them. This event is used in determining
the front-end bound category of the top-down pipeline slots
characterization.
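
As a rough illustration, the front-end bound fraction can be derived
from this event and cpu_clk_unhalted.thread_p sampled over the same
interval. The C sketch below assumes the 4-uop-per-cycle width
described above; the function name is illustrative, and the EXAMPLES
section at the end of this page shows how such counts are sampled
with cpc(3CPC).

#include <inttypes.h>

/*
 * Sketch: top-down "front-end bound" fraction.  At most 4 uops can
 * be issued per cycle, so the share of issue slots the front end
 * failed to fill is the undelivered-uop count divided by the total
 * slots available in the interval.
 */
double
frontend_bound(uint64_t undelivered, uint64_t unhalted_cycles)
{
        return ((double)undelivered / (4.0 * (double)unhalted_cycles));
}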

idq_uops_not_delivered.cycles_0_uops_deliv.core
Cycles per thread when 4 or more uops are not delivered to the
Resource Allocation Table (RAT) while the backend of the machine is
not stalled.

idq_uops_not_delivered.cycles_le_1_uop_deliv.core
Cycles per thread when 3 or more uops are not delivered to the
Resource Allocation Table (RAT) while the backend of the machine is
not stalled.

dsb2mite_switches.count
Decode Stream Buffer (DSB)-to-MITE switches.

dsb2mite_switches.penalty_cycles
This event counts the cycles attributed to a switch from the
Decode Stream Buffer (DSB), which holds decoded instructions, to
the legacy decode pipeline. It excludes cycles when the back-end
cannot accept new micro-ops. The penalty for these switches is
potentially several cycles of instruction starvation, where no
micro-ops are delivered to the back-end.

dsb_fill.other_cancel
Cases of cancelling a valid DSB fill for reasons other than exceeding
the way limit.

dsb_fill.exceed_dsb_lines
Cycles when a Decode Stream Buffer (DSB) fill encounters more than 3
DSB lines.

int_misc.rat_stall_cycles
Cycles when Resource Allocation Table (RAT) external stall is sent
to Instruction Decode Queue (IDQ) for the thread.

partial_rat_stalls.flags_merge_uop
Increments the number of flags-merge uops in flight each cycle.

partial_rat_stalls.slow_lea_window
This event counts the number of cycles with at least one slow LEA
uop being allocated. A uop is generally considered as slow LEA if
it has three sources (for example, two sources and immediate)
regardless of whether it is a result of LEA instruction or not.
Examples of slow LEA uops are uops with base, index, and offset
source operands, uops using base and index registers where the base
is EBP/RBP/R13, and uops using RIP-relative or 16-bit addressing
modes. See the Intel 64 and IA-32 Architectures Optimization
Reference Manual
for more details about slow LEA instructions.

partial_rat_stalls.mul_single_uop
Multiply packed/scalar single precision uops allocated.

resource_stalls.any
Resource-related stall cycles.

resource_stalls.lb
Counts the cycles of stall due to lack of load buffers.

resource_stalls.rs
Cycles stalled due to no eligible RS entry available.

resource_stalls.sb
Cycles stalled due to no store buffers available (not including
draining from sync).

resource_stalls.rob
Cycles stalled due to re-order buffer full.

resource_stalls2.bob_full
Cycles when the allocator is stalled because the Branch Order Buffer
(BOB) is full and a new branch needs it.

uops_issued.any
This event counts the number of uops issued by the front-end of the
pipeline to the back-end.

uops_issued.stall_cycles
Cycles when Resource Allocation Table (RAT) does not issue Uops to
Reservation Station (RS) for the thread.

uops_issued.core_stall_cycles
Cycles when Resource Allocation Table (RAT) does not issue Uops to
Reservation Station (RS) for all threads.

rs_events.empty_cycles
Cycles when Reservation Station (RS) is empty for the thread.

cpl_cycles.ring0
Unhalted core cycles when the thread is in ring 0.

cpl_cycles.ring0_trans
Number of intervals between processor halts while thread is in ring
0.

cpl_cycles.ring123
Unhalted core cycles when thread is in rings 1, 2, or 3.

rob_misc_events.lbr_inserts
Counts cases of saving a new LBR.

machine_clears.memory_ordering
This event counts the number of memory ordering Machine Clears
detected. Memory Ordering Machine Clears can result from memory
disambiguation, external snoops, or cross SMT-HW-thread snoop
(stores) hitting load buffers. Machine clears can have a
significant performance impact if they are happening frequently.

machine_clears.smc
This event is incremented when self-modifying code (SMC) is
detected, which causes a machine clear. Machine clears can have a
significant performance impact if they are happening frequently.

machine_clears.maskmov
Maskmov false fault - counts the number of times ucode passes through
the Maskmov flow due to the instruction's mask being 0 while the flow
completes without raising a fault.

inst_retired.any_p
Number of instructions retired. General Counter - architectural
event.

uops_retired.all
This event counts the number of micro-ops retired.

uops_retired.retire_slots
This event counts the number of retirement slots used each cycle.
There are potentially 4 slots that can be used each cycle -
meaning, 4 micro-ops or 4 instructions could retire each cycle.
This event is used in determining the 'Retiring' category of the
Top-Down pipeline slots characterization.
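
Under the same illustrative assumptions as the front-end bound sketch
above (4 slots per cycle, counts sampled over one interval, a
hypothetical helper name), the 'Retiring' fraction can be computed as
follows:

#include <inttypes.h>

/*
 * Sketch: top-down "Retiring" fraction.  For example, 3,200,000 used
 * retirement slots over 1,000,000 unhalted cycles gives
 * 3200000 / (4 * 1000000) = 0.80, i.e. 80% of slots retired a uop.
 */
double
retiring_fraction(uint64_t retire_slots, uint64_t unhalted_cycles)
{
        return ((double)retire_slots / (4.0 * (double)unhalted_cycles));
}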

uops_retired.stall_cycles
Cycles without actually retired uops.

uops_retired.total_cycles
Cycles with less than 10 actually retired uops.

br_inst_retired.conditional
Conditional branch instructions retired.

br_inst_retired.near_call
Direct and indirect near call instructions retired.

br_inst_retired.all_branches
All (macro) branch instructions retired.

br_inst_retired.near_return
Return instructions retired.

br_inst_retired.not_taken
Not taken branch instructions retired.

br_inst_retired.near_taken
Taken branch instructions retired.

br_inst_retired.far_branch
Far branch instructions retired.

br_inst_retired.all_branches_pebs
All (macro) branch instructions retired. (Precise Event - PEBS).

br_misp_retired.conditional
Mispredicted conditional branch instructions retired.

br_misp_retired.near_call
Direct and indirect mispredicted near call instructions retired.

br_misp_retired.all_branches
All mispredicted macro branch instructions retired.

br_misp_retired.not_taken
Mispredicted not taken branch instructions retired.

br_misp_retired.taken
Mispredicted taken branch instructions retired.

br_misp_retired.all_branches_pebs
Mispredicted macro branch instructions retired. (Precise Event -
PEBS)

other_assists.itlb_miss_retired
Retired instructions experiencing ITLB misses.

other_assists.avx_store
Number of GSSE memory assists for stores. The GSSE microcode assist
is invoked whenever the hardware is unable to properly handle
GSSE-256b operations.

other_assists.avx_to_sse
Number of transitions from AVX-256 to legacy SSE when the penalty is
applicable.

other_assists.sse_to_avx
Number of transitions from SSE to AVX-256 when the penalty is
applicable.

fp_assist.x87_output
Number of X87 assists due to output value.

fp_assist.x87_input
Number of X87 assists due to input value.

fp_assist.simd_output
Number of SIMD FP assists due to output values.

fp_assist.simd_input
Number of SIMD FP assists due to input values.

mem_uops_retired.stlb_miss_loads
Retired load uops that miss the STLB.

mem_uops_retired.stlb_miss_stores
Retired store uops that miss the STLB.

mem_uops_retired.lock_loads
Retired load uops with locked access.

mem_uops_retired.split_loads
This event counts line-split load uops retired to the architected
path. A line split is across a 64B cache line, which includes page
splits (4K).

mem_uops_retired.split_stores
This event counts line-split store uops retired to the architected
path. A line split is across a 64B cache line, which includes page
splits (4K).
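
For illustration only, the following C sketch shows an access pattern
whose load would count toward mem_uops_retired.split_loads; the
buffer, the gcc-style alignment attribute, and the function name are
assumptions, not part of any interface.

#include <inttypes.h>
#include <string.h>

/* 64-byte-aligned backing store (gcc/clang attribute syntax). */
static char buf[128] __attribute__((aligned(64)));

/*
 * Sketch: an 8-byte load at offset 60 touches bytes 60..67 of the
 * buffer, so it spans two 64B cache lines; when it retires it would
 * be counted as a split load.  An 8-byte memcpy typically compiles
 * to a single unaligned 8-byte load on x86.
 */
uint64_t
split_load(void)
{
        uint64_t v;

        (void) memcpy(&v, buf + 60, sizeof (v));
        return (v);
}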

mem_uops_retired.all_loads
This event counts the number of load uops retired.

mem_uops_retired.all_stores
This event counts the number of store uops retired.

mem_load_uops_retired.l1_hit
Retired load uops with L1 cache hits as data sources.

mem_load_uops_retired.l2_hit
Retired load uops with L2 cache hits as data sources.

mem_load_uops_retired.llc_hit
This event counts retired load uops that hit in the last-level (L3)
cache without snoops required.

mem_load_uops_retired.llc_miss
Miss in last-level (L3) cache. Excludes unknown data sources.

mem_load_uops_retired.hit_lfb
Retired load uops whose data source was a fill buffer (FB) hit: the
load missed the L1 but hit an FB allocated by a preceding miss to the
same cache line whose data was not yet ready.

mem_load_uops_llc_hit_retired.xsnp_miss
Retired load uops whose data sources were LLC hits and for which the
cross-core snoop missed in the on-package core caches.

mem_load_uops_llc_hit_retired.xsnp_hit
This event counts retired load uops that hit in the last-level
cache (L3) and were found in a non-modified state in a neighboring
core's private cache (same package). Since the last level cache is
inclusive, hits to the L3 may require snooping the private L2
caches of any cores on the same socket that have the line. In this
case, a snoop was required, and another L2 had the line in a non-
modified state.

mem_load_uops_llc_hit_retired.xsnp_hitm
This event counts retired load uops that hit in the last-level
cache (L3) and were found in a modified state in a neighboring
core's private cache (same package). Since the last level cache is
inclusive, hits to the L3 may require snooping the private L2
caches of any cores on the same socket that have the line. In this
case, a snoop was required, and another L2 had the line in a
modified state, so the line had to be invalidated in that L2 cache
and transferred to the requesting L2.
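
As an illustration of the sharing pattern behind these snoop events,
the C sketch below keeps a line modified in one core's private cache
while another thread loads it. Everything here is an assumption for
demonstration: thread placement is not controlled, the loop bound is
arbitrary, and the loads only produce xsnp_hitm events when the two
threads run on different cores of the same package.

#include <pthread.h>
#include <inttypes.h>

static volatile uint64_t shared_line;

/* Writer: keeps the cache line in Modified state in its core. */
static void *
writer(void *arg)
{
        for (;;)
                shared_line++;
        return (arg);
}

int
main(void)
{
        pthread_t t;
        uint64_t i, v = 0;

        (void) pthread_create(&t, NULL, writer, NULL);
        /* Reader: each load hits L3 but snoops the writer's cache. */
        for (i = 0; i < 100000000ULL; i++)
                v += shared_line;
        return ((int)(v & 0xff));
}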

mem_load_uops_llc_hit_retired.xsnp_none
Retired load uops whose data sources were hits in the LLC without
snoops required.

mem_load_uops_llc_miss_retired.local_dram
Data from local DRAM; either snoop not needed or snoop miss (RspI).

mem_load_uops_llc_miss_retired.remote_dram
Data from remote DRAM; either snoop not needed or snoop miss (RspI).

arith.fpu_div_active
Cycles when divider is busy executing divide operations.

arith.fpu_div
This event counts the number of divide operations executed.

fp_comp_ops_exe.x87
Number of FP Computational Uops Executed this cycle. The number of
FADDs, FSUBs, FCOMs, FMULs, integer MULs and IMULs, FDIVs, FPREMs,
FSQRTs, integer DIVs, and IDIVs. This event does not distinguish an
FADD used in the middle of a transcendental flow from a separate FADD
instruction.

fp_comp_ops_exe.sse_packed_double
Number of SSE* or AVX-128 FP Computational packed double-precision
uops issued this cycle.

fp_comp_ops_exe.sse_scalar_single
Number of SSE* or AVX-128 FP Computational scalar single-precision
uops issued this cycle.

fp_comp_ops_exe.sse_packed_single
Number of SSE* or AVX-128 FP Computational packed single-precision
uops issued this cycle.

fp_comp_ops_exe.sse_scalar_double
Number of SSE* or AVX-128 FP Computational scalar double-precision
uops issued this cycle.

simd_fp_256.packed_single
Number of AVX-256 Computational FP single precision uops issued
this cycle.

simd_fp_256.packed_double
Number of AVX-256 Computational FP double precision uops issued
this cycle.

uops_dispatched.thread
Uops dispatched per thread.

uops_dispatched.core
Uops dispatched from any thread.

uops_dispatched_port.port_0
Cycles per thread when uops are dispatched to port 0.

uops_dispatched_port.port_1
Cycles per thread when uops are dispatched to port 1.

uops_dispatched_port.port_4
Cycles per thread when uops are dispatched to port 4.

uops_dispatched_port.port_5
Cycles per thread when uops are dispatched to port 5.

cycle_activity.cycles_no_dispatch
Each cycle there was no dispatch for this thread, increment by 1.
Note this is connected to Umask 2. No dispatch can be deduced from
the UOPS_EXECUTED event.

cycle_activity.cycles_l1d_pending
Each cycle there was a miss-pending demand load for this thread,
increment by 1. Note this is in DCU and connected to Umask 1. Miss
Pending demand load should be deduced by OR-ing increment bits of
DCACHE_MISS_PEND.PENDING.

cycle_activity.cycles_l2_pending
Each cycle there was an MLC-miss pending demand load for this thread
(i.e., a non-completed valid SQ entry allocated for a demand load and
waiting for Uncore), increment by 1. Note this is in MLC and
connected to Umask 0.

cycle_activity.stalls_l1d_pending
Each cycle there was a miss-pending demand load for this thread and no
uops dispatched, increment by 1. Note this is in DCU and connected
to Umask 1 and 2. Miss Pending demand load should be deduced by OR-
ing increment bits of DCACHE_MISS_PEND.PENDING.

cycle_activity.stalls_l2_pending
Each cycle there was an MLC-miss pending demand load and no uops
dispatched on this thread (i.e., a non-completed valid SQ entry
allocated for a demand load and waiting for Uncore), increment by 1.
Note this is in MLC and connected to Umask 0 and 2.

ept.walk_cycles
Cycle count for an Extended Page table walk. The Extended Page
Directory cache is used by Virtual Machine operating systems while
the guest operating systems use the standard TLB caches.

itlb_misses.miss_causes_a_walk
Misses at all ITLB levels that cause page walks.

itlb_misses.walk_completed
Misses in all ITLB levels that cause completed page walks.

itlb_misses.walk_duration
This event counts cycles when the Page Miss Handler (PMH) is servicing
page walks caused by ITLB misses.

itlb_misses.stlb_hit
Operations that miss the first ITLB level but hit the second and do
not cause any page walks.

dtlb_load_misses.miss_causes_a_walk
Load misses in all DTLB levels that cause page walks.

dtlb_load_misses.walk_completed
Load misses at all DTLB levels that cause completed page walks.

dtlb_load_misses.walk_duration
This event counts cycles when the page miss handler (PMH) is
servicing page walks caused by DTLB load misses.

dtlb_load_misses.stlb_hit
This event counts load operations that miss the first DTLB level
but hit the second and do not cause any page walks. The penalty in
this case is approximately 7 cycles.

dtlb_store_misses.miss_causes_a_walk
Store misses in all DTLB levels that cause page walks.

dtlb_store_misses.walk_completed
Store misses in all DTLB levels that cause completed page walks.

dtlb_store_misses.walk_duration
Cycles when PMH is busy with page walks.

dtlb_store_misses.stlb_hit
Store operations that miss the first TLB level but hit the second
and do not cause page walks.

tlb_flush.dtlb_thread
DTLB flush attempts of the thread-specific entries.

tlb_flush.stlb_any
STLB flush attempts.

l1d.replacement
This event counts L1D data line replacements. Replacements occur
when a new line is brought into the cache, causing eviction of a
line loaded earlier.

l1d.allocated_in_m
Allocated L1D data cache lines in M state.

l1d.eviction
L1D data cache lines in M state evicted due to replacement.

l1d.all_m_replacement
Cache lines in M state evicted out of L1D due to Snoop HitM or
dirty line replacement.

l1d_pend_miss.pending
L1D miss outstanding duration in cycles.

l1d_pend_miss.pending_cycles
Cycles with L1D load Misses outstanding.

load_hit_pre.sw_pf
Non-software-prefetch load dispatches that hit a fill buffer (FB)
allocated for a software prefetch.

load_hit_pre.hw_pf
Non-software-prefetch load dispatches that hit a fill buffer (FB)
allocated for a hardware prefetch.

hw_pre_req.dl1_miss
Hardware Prefetch requests that miss the L1D cache. This accounts for
both the L1 streamer and IP-based (IPP) HW prefetchers. A request is
counted each time it accesses the cache and misses it, including if a
block is applicable or if it hits the Fill Buffer.

lock_cycles.split_lock_uc_lock_duration
Cycles when L1 and L2 are locked due to UC or split lock.

lock_cycles.cache_lock_duration
Cycles when L1D is locked.

ld_blocks.data_unknown
Loads delayed due to store buffer (SB) blocks: preceding store
operations with known addresses but unknown data.

ld_blocks.store_forward
This event counts loads that followed a store to the same address,
where the data could not be forwarded inside the pipeline from the
store to the load. The most common reason why store forwarding
would be blocked is when a load's address range overlaps with a
preceding smaller uncompleted store. See the table of not
supported store forwards in the Intel 64 and IA-32 Architectures
Optimization Reference Manual. The penalty for blocked store
forwarding is that the load must wait for the store to complete
before it can be issued.
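
A minimal C sketch of a pattern that blocks store forwarding follows;
the function is purely illustrative.

#include <inttypes.h>
#include <string.h>

/*
 * Sketch: the 1-byte store overlaps the following 8-byte load but is
 * narrower than it, so the store's data cannot be forwarded and the
 * load must wait for the store to complete (the situation counted by
 * ld_blocks.store_forward).
 */
uint64_t
blocked_forward(uint64_t *p)
{
        uint64_t v;

        *(uint8_t *)p = 0xff;              /* narrow store */
        (void) memcpy(&v, p, sizeof (v));  /* wider overlapping load */
        return (v);
}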

ld_blocks.no_sr
This event counts the number of times that split load operations
are temporarily blocked because all resources for handling the
split accesses are in use.

ld_blocks.all_block
Number of cases where any load ends up with a valid block-code
written to the load buffer (including blocks due to Memory Order
Buffer (MOB), Data Cache Unit (DCU), TLB, but load has no DCU
miss).

ld_blocks_partial.address_alias
Aliasing occurs when a load is issued after a store and their
memory addresses are offset by 4K. This event counts the number of
loads that aliased with a preceding store, resulting in an extended
address check in the pipeline. The extended address check
typically has a performance penalty of 5 cycles.
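
For illustration, a C sketch of such a 4K-aliased store/load pair; the
function is an assumption, and buf must point to at least 4097
accessible bytes.

#include <inttypes.h>

/*
 * Sketch: the store and the following load are exactly 4096 bytes
 * apart, so their addresses agree in bits 0..11.  The load issues
 * speculatively, falsely matches the older store, and triggers the
 * extended address check counted by ld_blocks_partial.address_alias.
 */
uint8_t
alias_4k(uint8_t *buf)
{
        buf[4096] = 1;          /* store to buf + 4096 */
        return (buf[0]);        /* load from buf: same low 12 bits */
}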

ld_blocks_partial.all_sta_block
This event counts the number of times that load operations are
temporarily blocked because of older stores, with addresses that
are not yet known. A load operation may incur more than one block
of this type.

misalign_mem_ref.loads
Speculative cache line split load uops dispatched to L1 cache.

misalign_mem_ref.stores
Speculative cache line split STA uops dispatched to L1 cache.

agu_bypass_cancel.count
This event counts executed load operations with all the following
traits: 1. addressing of the format [base + offset], 2. the offset is
between 1 and 2047, 3. the address specified in the base register is
in one page and the address [base + offset] is in another page.

offcore_requests_outstanding.demand_data_rd
Offcore outstanding Demand Data Read transactions in uncore queue.

offcore_requests_outstanding.cycles_with_demand_data_rd
Cycles when offcore outstanding Demand Data Read transactions are
present in SuperQueue (SQ), queue to uncore.

offcore_requests_outstanding.demand_rfo
Offcore outstanding RFO store transactions in SuperQueue (SQ),
queue to uncore.

offcore_requests_outstanding.all_data_rd
Offcore outstanding cacheable Core Data Read transactions in
SuperQueue (SQ), queue to uncore.

offcore_requests_outstanding.cycles_with_data_rd
Cycles when offcore outstanding cacheable Core Data Read
transactions are present in SuperQueue (SQ), queue to uncore.

offcore_requests.demand_data_rd
Demand Data Read requests sent to uncore.

offcore_requests.demand_code_rd
Cacheable and noncacheable code read requests.

offcore_requests.demand_rfo
Demand RFO requests including regular RFOs, locks, ItoM.

offcore_requests.all_data_rd
Demand and prefetch data reads.

offcore_requests_buffer.sq_full
Cases when the offcore requests buffer cannot take more entries for
the core.

l2_rqsts.demand_data_rd_hit
Demand Data Read requests that hit L2 cache.

l2_rqsts.rfo_hit
RFO requests that hit L2 cache.

l2_rqsts.rfo_miss
RFO requests that miss L2 cache.

l2_rqsts.code_rd_hit
L2 cache hits when fetching instructions, code reads.

l2_rqsts.code_rd_miss
L2 cache misses when fetching instructions.

l2_rqsts.pf_hit
Requests from the L2 hardware prefetchers that hit L2 cache.

l2_rqsts.pf_miss
Requests from the L2 hardware prefetchers that miss L2 cache.

l2_store_lock_rqsts.miss
RFOs that miss cache lines.

l2_store_lock_rqsts.hit_e
RFOs that hit cache lines in E state.

l2_store_lock_rqsts.hit_m
RFOs that hit cache lines in M state.

l2_store_lock_rqsts.all
RFOs that access cache lines in any state.

l2_l1d_wb_rqsts.miss
Counts the number of modified lines evicted from L1 that missed L2
(non-rejected writebacks from the DCU).

l2_l1d_wb_rqsts.hit_s
Not rejected writebacks from L1D to L2 cache lines in S state.

l2_l1d_wb_rqsts.hit_e
Not rejected writebacks from L1D to L2 cache lines in E state.

l2_l1d_wb_rqsts.hit_m
Not rejected writebacks from L1D to L2 cache lines in M state.

l2_l1d_wb_rqsts.all
Not rejected writebacks from L1D to L2 cache lines in any state.

l2_trans.demand_data_rd
Demand Data Read requests that access L2 cache.

l2_trans.rfo
RFO requests that access L2 cache.

l2_trans.code_rd
L2 cache accesses when fetching instructions.

l2_trans.all_pf
L2 or LLC HW prefetches that access L2 cache.

l2_trans.l1d_wb
L1D writebacks that access L2 cache.

l2_trans.l2_fill
L2 fill requests that access L2 cache.

l2_trans.l2_wb
L2 writebacks that access L2 cache.

l2_trans.all_requests
Transactions accessing L2 pipe.

l2_lines_in.i
L2 cache lines in I state filling L2.

l2_lines_in.s
L2 cache lines in S state filling L2.

l2_lines_in.e
L2 cache lines in E state filling L2.

l2_lines_in.all
This event counts the number of L2 cache lines brought into the L2
cache. Lines are filled into the L2 cache when there was an L2
miss.

l2_lines_out.demand_clean
Clean L2 cache lines evicted by demand.

l2_lines_out.demand_dirty
Dirty L2 cache lines evicted by demand.

l2_lines_out.pf_clean
Clean L2 cache lines evicted by L2 prefetch.

l2_lines_out.pf_dirty
Dirty L2 cache lines evicted by L2 prefetch.

l2_lines_out.dirty_all
Dirty L2 cache lines filling the L2.

longest_lat_cache.miss
Core-originated cacheable demand requests missed LLC.

longest_lat_cache.reference
Core-originated cacheable demand requests that refer to LLC.

sq_misc.split_lock
Split locks in SQ.

cpu_clk_thread_unhalted.ref_xclk
Reference cycles when the thread is unhalted (counts at 100 MHz
rate).

cpu_clk_thread_unhalted.one_thread_active
Count XClk pulses when this thread is unhalted and the other is
halted.

uops_dispatched_port.port_0_core
Cycles per core when uops are dispatched to port 0.

uops_dispatched_port.port_1_core
Cycles per core when uops are dispatched to port 1.

uops_dispatched_port.port_4_core
Cycles per core when uops are dispatched to port 4.

uops_dispatched_port.port_5_core
Cycles per core when uops are dispatched to port 5.

idq.mite_cycles
Cycles when uops are being delivered to Instruction Decode Queue
(IDQ) from MITE path.

idq.dsb_cycles
Cycles when uops are being delivered to Instruction Decode Queue
(IDQ) from Decode Stream Buffer (DSB) path.

uops_dispatched_port.port_2
Cycles per thread when load or STA uops are dispatched to port 2.

uops_dispatched_port.port_3
Cycles per thread when load or STA uops are dispatched to port 3.

idq.ms_dsb_cycles
Cycles when uops initiated by Decode Stream Buffer (DSB) are being
delivered to Instruction Decode Queue (IDQ) while Microcode
Sequencer (MS) is busy.

idq.ms_dsb_occur
Deliveries to Instruction Decode Queue (IDQ) initiated by Decode
Stream Buffer (DSB) while Microcode Sequencer (MS) is busy.

uops_dispatched_port.port_2_core
Cycles per core when load or STA uops are dispatched to port 2.

uops_dispatched_port.port_3_core
Cycles per core when load or STA uops are dispatched to port 3.

l2_rqsts.all_demand_data_rd
Demand Data Read requests.

l2_rqsts.all_rfo
RFO requests to L2 cache.

l2_rqsts.all_code_rd
L2 code requests.

l2_rqsts.all_pf
Requests from L2 hardware prefetchers.

l1d_blocks.bank_conflict_cycles
Cycles when dispatched loads are cancelled due to L1D bank
conflicts with other load ports.

resource_stalls2.all_prf_control
Resource stalls2: control structures for physical registers are full.

idq_uops_not_delivered.cycles_le_2_uop_deliv.core
Cycles with less than 2 uops delivered by the front end.

idq_uops_not_delivered.cycles_le_3_uop_deliv.core
Cycles with less than 3 uops delivered by the front end.

resource_stalls2.all_fl_empty
Cycles when either free list is empty.

resource_stalls.mem_rs
Resource stalls due to memory buffers or Reservation Station (RS)
being fully utilized.

resource_stalls.ooo_rsrc
Resource stalls due to the ROB being full, FCSW, MXCSR, and OTHER.

resource_stalls2.ooo_rsrc
Resource stalls due to out-of-order resources being full.

resource_stalls.lb_sb
Resource stalls due to load or store buffers all being in use.

int_misc.recovery_cycles
Number of cycles waiting for the checkpoints in Resource Allocation
Table (RAT) to be recovered after a nuke due to all cases other than
JEClear (e.g., whenever a ucode assist is needed, such as an SSE
exception or memory disambiguation).

partial_rat_stalls.flags_merge_uop_cycles
This event counts the number of cycles spent executing performance-
sensitive flags-merging uops. For example, shift CL
(merge_arith_flags). For more details, see the Intel 64 and IA-32
Architectures Optimization Reference Manual.

idq_uops_not_delivered.cycles_ge_1_uop_deliv.core
Cycles when 1 or more uops were delivered by the front end.

int_misc.recovery_stalls_count
Number of occurrences waiting for the checkpoints in Resource
Allocation Table (RAT) to be recovered after a nuke due to all cases
other than JEClear (e.g., whenever a ucode assist is needed, such as
an SSE exception or memory disambiguation).

idq.all_dsb_cycles_4_uops
Cycles Decode Stream Buffer (DSB) is delivering 4 Uops.

idq.all_dsb_cycles_any_uops
Cycles Decode Stream Buffer (DSB) is delivering any Uop.

idq.all_mite_cycles_4_uops
Cycles MITE is delivering 4 Uops.

idq.all_mite_cycles_any_uops
Cycles MITE is delivering any Uop.

dsb_fill.all_cancel
Cases of cancelling a valid Decode Stream Buffer (DSB) fill for
reasons other than exceeding the way limit.

fp_assist.any
Cycles with any input/output SSE or FP assist.

baclears.any
Counts the total number of times the front end is resteered, mainly
when the BPU cannot provide a correct prediction and this is
corrected by other branch handling mechanisms at the front end.

offcore_requests_outstanding.cycles_with_demand_rfo
Offcore outstanding demand RFO transactions in SuperQueue
(SQ), queue to uncore, every cycle.

idq_uops_not_delivered.cycles_fe_was_ok
Counts cycles during which the FE delivered 4 uops or the Resource
Allocation Table (RAT) was stalling the FE.

br_inst_exec.all_branches
Speculative and retired branches.

br_misp_exec.all_branches
Speculative and retired mispredicted branches.

idq.mite_all_uops
Uops delivered to Instruction Decode Queue (IDQ) from MITE path.

uops_retired.core_stall_cycles
Cycles without actually retired uops.

lsd.cycles_4_uops
Cycles when 4 uops are delivered by the LSD but do not come from the
decoder.

machine_clears.count
Number of machine clears (nukes) of any type.

rs_events.empty_end
Counts end of periods where the Reservation Station (RS) was empty.
Could be useful to precisely locate Frontend Latency Bound issues.

idq.ms_switches
Number of switches from DSB (Decode Stream Buffer) or MITE (legacy
decode pipeline) to the Microcode Sequencer.

cpu_clk_unhalted.thread_p_any
Core cycles when at least one thread on the physical core is not in
halt state.

cpu_clk_thread_unhalted.ref_xclk_any
Reference cycles when at least one thread on the physical core
is unhalted (counts at 100 MHz rate).

int_misc.recovery_cycles_any
Core cycles the allocator was stalled due to recovery from earlier
clear event for any thread running on the physical core (e.g.
misprediction or memory nuke).

offcore_requests_outstanding.demand_data_rd_c6
Cycles with at least 6 offcore outstanding Demand Data Read
transactions in uncore queue.

uops_executed.core_cycles_ge_1
Cycles when at least 1 micro-op is executed from any thread on the
physical core.

uops_executed.core_cycles_ge_2
Cycles when at least 2 micro-ops are executed from any thread on the
physical core.

uops_executed.core_cycles_ge_3
Cycles when at least 3 micro-ops are executed from any thread on the
physical core.

uops_executed.core_cycles_ge_4
Cycles when at least 4 micro-ops are executed from any thread on the
physical core.

uops_executed.core_cycles_none
Cycles with no micro-ops executed from any thread on the physical
core.

l1d_pend_miss.pending_cycles_any
Cycles with L1D load misses outstanding from any thread on the
physical core.

l1d_pend_miss.fb_full
Cycles a demand request was blocked due to Fill Buffer
unavailability.

cpu_clk_unhalted.ref_xclk
Reference cycles when the thread is unhalted (counts at 100 MHz
rate).

cpu_clk_unhalted.ref_xclk_any
Reference cycles when at least one thread on the physical core
is unhalted (counts at 100 MHz rate).

cpu_clk_unhalted.one_thread_active
Count XClk pulses when this thread is unhalted and the other thread
is halted.
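
EXAMPLES


The following sketch shows one way the events listed above might be
counted from C with the cpc(3CPC) interfaces (link with -lcpc). It is
a minimal sketch rather than a complete program: error handling is
omitted, and the workload comment is a placeholder. It samples two
retired-branch events for the calling LWP and derives a branch
misprediction rate.

#include <libcpc.h>
#include <inttypes.h>
#include <stdio.h>

int
main(void)
{
        cpc_t *cpc = cpc_open(CPC_VER_CURRENT);
        cpc_set_t *set = cpc_set_create(cpc);
        int br = cpc_set_add_request(cpc, set,
            "br_inst_retired.all_branches", 0, CPC_COUNT_USER, 0, NULL);
        int mp = cpc_set_add_request(cpc, set,
            "br_misp_retired.all_branches", 0, CPC_COUNT_USER, 0, NULL);
        cpc_buf_t *buf = cpc_buf_create(cpc, set);
        uint64_t branches, mispredicts;

        /* Bind the set to the current LWP and run the workload. */
        (void) cpc_bind_curlwp(cpc, set, 0);
        /* ... run the code to be measured ... */
        (void) cpc_set_sample(cpc, set, buf);
        (void) cpc_unbind(cpc, set);

        (void) cpc_buf_get(cpc, buf, br, &branches);
        (void) cpc_buf_get(cpc, buf, mp, &mispredicts);
        (void) printf("mispredict rate: %.4f\n",
            (double)mispredicts / (double)branches);
        (void) cpc_close(cpc);
        return (0);
}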

SEE ALSO


cpc(3CPC)

https://download.01.org/perfmon/index/

illumos June 18, 2018 illumos