author     Linus Torvalds <torvalds@linux-foundation.org>  2016-07-25 13:20:41 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2016-07-25 13:20:41 -0700
commit     7e4dc77b2869a683fc43c0394fca5441816390ba (patch)
tree       62e734c599bc1da2712fdb63be996622c415a83a /tools/perf/util/evlist.h
parent     89e7eb098adfe342bc036f00201eb579d448f033 (diff)
parent     5048c2af078d5976895d521262a8802ea791f3b0 (diff)
download   linux-7e4dc77b2869a683fc43c0394fca5441816390ba.tar.gz
Merge branch 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull perf updates from Ingo Molnar:
"With over 300 commits it's been a busy cycle - with most of the work
concentrated on the tooling side (as it should).
The main kernel side enhancements were:
- Add per event callchain limit: Recently we introduced a sysctl to
tune the max-stack for all events for which callchains were
requested:
$ sysctl kernel.perf_event_max_stack
kernel.perf_event_max_stack = 127
Now this patch introduces a way to configure this per event, i.e.
this becomes possible:
$ perf record -e sched:*/max-stack=2/ -e block:*/max-stack=10/ -a
allowing finer tuning of how much buffer space callchains use.
This uses a u16 from the reserved space at the end of perf_event_attr,
leaving another u16 for future use (a minimal perf_event_open() sketch
follows this list).
There has been interest in even finer tuning, namely controlling the
max stack for kernel and userspace callchains separately. Further
discussion is needed; we may, for instance, use the remaining u16 for
that: when it is present, the sample_max_stack introduced in this patch
would apply to the kernel callchain, and the remaining u16 would limit
the userspace callchain (Arnaldo Carvalho de Melo)
- Optimize AUX event (hardware assisted side-band event) delivery
(Kan Liang)
- Rework Intel family name macro usage (this is partially x86 arch
work) (Dave Hansen)
- Refine and fix Intel LBR support (David Carrillo-Cisneros)
- Add support for Intel 'TopDown' events (Andi Kleen)
- Intel uncore PMU driver fixes and enhancements (Kan Liang)
- ... other misc changes.
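To make the per-event callchain limit above concrete, here is a minimal
sketch of opening a sampling event with the new
perf_event_attr.sample_max_stack field. Everything except that field
(the software clock event, the sample period, the lack of error
recovery) is an illustrative choice, not part of the patch, and it
assumes a kernel and headers that already carry this series:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <linux/perf_event.h>

/* Illustrative wrapper; glibc provides no stub for perf_event_open(). */
static int sys_perf_event_open(struct perf_event_attr *attr, pid_t pid,
                               int cpu, int group_fd, unsigned long flags)
{
        return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
        struct perf_event_attr attr;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.size             = sizeof(attr);
        attr.type             = PERF_TYPE_SOFTWARE;      /* placeholder event */
        attr.config           = PERF_COUNT_SW_CPU_CLOCK;
        attr.sample_period    = 100000;
        attr.sample_type      = PERF_SAMPLE_IP | PERF_SAMPLE_CALLCHAIN;
        attr.sample_max_stack = 2;    /* the per-event limit added by this series */

        fd = sys_perf_event_open(&attr, 0, -1, -1, 0);   /* current task, any CPU */
        if (fd < 0) {
                perror("perf_event_open");
                return 1;
        }
        /* mmap the ring buffer and read PERF_RECORD_SAMPLEs as usual ... */
        close(fd);
        return 0;
}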
Here's an incomplete list of the tooling enhancements (but there's
much more, see the shortlog and the git log for details):
- Support cross unwinding, i.e. collecting '--call-graph dwarf'
perf.data files on one machine and then doing the analysis on another
machine of a different hardware architecture. This makes it possible,
for instance, to do:
$ perf record -a --call-graph dwarf
on an x86-32 or aarch64 system and then run 'perf report' on it on an
x86_64 workstation (He Kuang)
- Allow reading from a backward ring buffer (one set up via
sys_perf_event_open() with perf_event_attr.write_backward = 1, as in
the sketch after this list) (Wang Nan)
- Finish merging initial SDT (Statically Defined Traces) support, see
cset comments for details about how it all works (Masami Hiramatsu)
- Support attaching eBPF programs to tracepoints (Wang Nan)
- Add demangling of symbols in programs written in the Rust language
(David Tolnay)
- Add support for tracepoints in the python binding, including an
example that sets up and parses sched:sched_switch events,
tools/perf/python/tracepoint.py (Jiri Olsa)
- Introduce --stdio-color to set up the color output mode selection
in 'annotate' and 'report', allowing these tools to emit color escape
sequences even when their output is redirected (Arnaldo Carvalho de
Melo)
- Add 'callindent' option to 'perf script -F', to indent the Intel PT
call stack, making this output more ftrace-like (Adrian Hunter,
Andi Kleen)
- Allow dumping the object files generated by llvm when processing
eBPF scriptlet events (Wang Nan)
- Add stackcollapse.py script to help generating flame graphs (Paolo
Bonzini)
- Add --ldlat option to 'perf mem' to specify the load latency for load
events (e.g. cpu/mem-loads/ ) (Jiri Olsa)
- Tooling support for Intel TopDown counters, recently added to the
kernel (Andi Kleen)"
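As a companion to the backward ring buffer item above, here is a
minimal setup sketch (an editor's illustration, not code from this pull
request): perf_event_attr.write_backward asks the kernel to fill the
buffer from the end, and the mapping can be read-only because the
reader never publishes a tail pointer. The event choice, buffer size
and missing error recovery are assumptions; perf's own implementation
lives in tools/perf/util/evlist.c.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <linux/perf_event.h>

int main(void)
{
        struct perf_event_attr attr;
        size_t page = sysconf(_SC_PAGESIZE);
        size_t len  = page + 8 * page;        /* header page + 2^3 data pages */
        void *ring;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.size           = sizeof(attr);
        attr.type           = PERF_TYPE_SOFTWARE;
        attr.config         = PERF_COUNT_SW_CPU_CLOCK;
        attr.sample_period  = 100000;
        attr.sample_type    = PERF_SAMPLE_IP | PERF_SAMPLE_TID;
        attr.write_backward = 1;              /* kernel writes from the end of the buffer */

        fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0) {
                perror("perf_event_open");
                return 1;
        }

        /* Read-only mapping: the consumer of a backward buffer never moves data_tail. */
        ring = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
        if (ring == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /* ... pause the output (PERF_EVENT_IOC_PAUSE_OUTPUT), then walk records
         * starting from data_head, which is what the perf_mmap__read_backward()
         * helpers added in the diff below do for the perf tools. */
        munmap(ring, len);
        close(fd);
        return 0;
}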
* 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (303 commits)
perf tests: Add is_printable_array test
perf tools: Make is_printable_array global
perf script python: Fix string vs byte array resolving
perf probe: Warn unmatched function filter correctly
perf cpu_map: Add more helpers
perf stat: Balance opening and reading events
tools: Copy linux/{hash,poison}.h and check for drift
perf tools: Remove include/linux/list.h from perf's MANIFEST
tools: Copy the bitops files accessed from the kernel and check for drift
Remove: kernel unistd*h files from perf's MANIFEST, not used
perf tools: Remove tools/perf/util/include/linux/const.h
perf tools: Remove tools/perf/util/include/asm/byteorder.h
perf tools: Add missing linux/compiler.h include to perf-sys.h
perf jit: Remove some no-op error handling
perf jit: Add missing curly braces
objtool: Initialize variable to silence old compiler
objtool: Add -I$(srctree)/tools/arch/$(ARCH)/include/uapi
perf record: Add --tail-synthesize option
perf session: Don't warn about out of order event if write_backward is used
perf tools: Enable overwrite settings
...
Diffstat (limited to 'tools/perf/util/evlist.h')
-rw-r--r--   tools/perf/util/evlist.h   92
1 file changed, 69 insertions(+), 23 deletions(-)
diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
index d740fb877ab6..4fd034f22d2f 100644
--- a/tools/perf/util/evlist.h
+++ b/tools/perf/util/evlist.h
@@ -35,6 +35,40 @@ struct perf_mmap {
 	char		 event_copy[PERF_SAMPLE_MAX_SIZE] __attribute__((aligned(8)));
 };
 
+static inline size_t
+perf_mmap__mmap_len(struct perf_mmap *map)
+{
+	return map->mask + 1 + page_size;
+}
+
+/*
+ * State machine of bkw_mmap_state:
+ *
+ *                     .________________(forbid)_____________.
+ *                     |                                      V
+ *  NOTREADY --(0)--> RUNNING --(1)--> DATA_PENDING --(2)--> EMPTY
+ *                     ^  ^              |   ^               |
+ *                     |  |__(forbid)____/   |___(forbid)___/|
+ *                     |                                     |
+ *                      \_________________(3)_______________/
+ *
+ * NOTREADY     : Backward ring buffers are not ready
+ * RUNNING      : Backward ring buffers are recording
+ * DATA_PENDING : We are required to collect data from backward ring buffers
+ * EMPTY        : We have collected data from backward ring buffers.
+ *
+ * (0): Setup backward ring buffer
+ * (1): Pause ring buffers for reading
+ * (2): Read from ring buffers
+ * (3): Resume ring buffers for recording
+ */
+enum bkw_mmap_state {
+	BKW_MMAP_NOTREADY,
+	BKW_MMAP_RUNNING,
+	BKW_MMAP_DATA_PENDING,
+	BKW_MMAP_EMPTY,
+};
+
 struct perf_evlist {
 	struct list_head entries;
 	struct hlist_head heads[PERF_EVLIST__HLIST_SIZE];
@@ -44,17 +78,18 @@ struct perf_evlist {
 	bool		 overwrite;
 	bool		 enabled;
 	bool		 has_user_cpus;
-	bool		 backward;
 	size_t		 mmap_len;
 	int		 id_pos;
 	int		 is_pos;
 	u64		 combined_sample_type;
+	enum bkw_mmap_state bkw_mmap_state;
 	struct	 {
 		int	cork_fd;
 		pid_t	pid;
 	} workload;
 	struct fdarray	 pollfd;
 	struct perf_mmap *mmap;
+	struct perf_mmap *backward_mmap;
 	struct thread_map *threads;
 	struct cpu_map	  *cpus;
 	struct perf_evsel *selected;
@@ -129,16 +164,24 @@ struct perf_evsel *perf_evlist__id2evsel_strict(struct perf_evlist *evlist,
 
 struct perf_sample_id *perf_evlist__id2sid(struct perf_evlist *evlist, u64 id);
 
+void perf_evlist__toggle_bkw_mmap(struct perf_evlist *evlist, enum bkw_mmap_state state);
+
+union perf_event *perf_mmap__read_forward(struct perf_mmap *map, bool check_messup);
+union perf_event *perf_mmap__read_backward(struct perf_mmap *map);
+
+void perf_mmap__read_catchup(struct perf_mmap *md);
+void perf_mmap__consume(struct perf_mmap *md, bool overwrite);
+
 union perf_event *perf_evlist__mmap_read(struct perf_evlist *evlist, int idx);
 
+union perf_event *perf_evlist__mmap_read_forward(struct perf_evlist *evlist,
+						 int idx);
 union perf_event *perf_evlist__mmap_read_backward(struct perf_evlist *evlist,
 						  int idx);
 void perf_evlist__mmap_read_catchup(struct perf_evlist *evlist, int idx);
 
 void perf_evlist__mmap_consume(struct perf_evlist *evlist, int idx);
 
-int perf_evlist__pause(struct perf_evlist *evlist);
-int perf_evlist__resume(struct perf_evlist *evlist);
 int perf_evlist__open(struct perf_evlist *evlist);
 void perf_evlist__close(struct perf_evlist *evlist);
 
@@ -249,70 +292,70 @@ void perf_evlist__to_front(struct perf_evlist *evlist,
 			   struct perf_evsel *move_evsel);
 
 /**
- * __evlist__for_each - iterate thru all the evsels
+ * __evlist__for_each_entry - iterate thru all the evsels
  * @list: list_head instance to iterate
  * @evsel: struct evsel iterator
  */
-#define __evlist__for_each(list, evsel) \
+#define __evlist__for_each_entry(list, evsel) \
 	list_for_each_entry(evsel, list, node)
 
 /**
- * evlist__for_each - iterate thru all the evsels
+ * evlist__for_each_entry - iterate thru all the evsels
  * @evlist: evlist instance to iterate
  * @evsel: struct evsel iterator
  */
-#define evlist__for_each(evlist, evsel) \
-	__evlist__for_each(&(evlist)->entries, evsel)
+#define evlist__for_each_entry(evlist, evsel) \
+	__evlist__for_each_entry(&(evlist)->entries, evsel)
 
 /**
- * __evlist__for_each_continue - continue iteration thru all the evsels
+ * __evlist__for_each_entry_continue - continue iteration thru all the evsels
  * @list: list_head instance to iterate
  * @evsel: struct evsel iterator
 */
-#define __evlist__for_each_continue(list, evsel) \
+#define __evlist__for_each_entry_continue(list, evsel) \
 	list_for_each_entry_continue(evsel, list, node)
 
 /**
- * evlist__for_each_continue - continue iteration thru all the evsels
+ * evlist__for_each_entry_continue - continue iteration thru all the evsels
  * @evlist: evlist instance to iterate
  * @evsel: struct evsel iterator
 */
-#define evlist__for_each_continue(evlist, evsel) \
-	__evlist__for_each_continue(&(evlist)->entries, evsel)
+#define evlist__for_each_entry_continue(evlist, evsel) \
+	__evlist__for_each_entry_continue(&(evlist)->entries, evsel)
 
 /**
- * __evlist__for_each_reverse - iterate thru all the evsels in reverse order
+ * __evlist__for_each_entry_reverse - iterate thru all the evsels in reverse order
 * @list: list_head instance to iterate
 * @evsel: struct evsel iterator
 */
-#define __evlist__for_each_reverse(list, evsel) \
+#define __evlist__for_each_entry_reverse(list, evsel) \
 	list_for_each_entry_reverse(evsel, list, node)
 
 /**
- * evlist__for_each_reverse - iterate thru all the evsels in reverse order
+ * evlist__for_each_entry_reverse - iterate thru all the evsels in reverse order
 * @evlist: evlist instance to iterate
 * @evsel: struct evsel iterator
 */
-#define evlist__for_each_reverse(evlist, evsel) \
-	__evlist__for_each_reverse(&(evlist)->entries, evsel)
+#define evlist__for_each_entry_reverse(evlist, evsel) \
+	__evlist__for_each_entry_reverse(&(evlist)->entries, evsel)
 
 /**
- * __evlist__for_each_safe - safely iterate thru all the evsels
+ * __evlist__for_each_entry_safe - safely iterate thru all the evsels
 * @list: list_head instance to iterate
 * @tmp: struct evsel temp iterator
 * @evsel: struct evsel iterator
 */
-#define __evlist__for_each_safe(list, tmp, evsel) \
+#define __evlist__for_each_entry_safe(list, tmp, evsel) \
 	list_for_each_entry_safe(evsel, tmp, list, node)
 
 /**
- * evlist__for_each_safe - safely iterate thru all the evsels
+ * evlist__for_each_entry_safe - safely iterate thru all the evsels
 * @evlist: evlist instance to iterate
 * @evsel: struct evsel iterator
 * @tmp: struct evsel temp iterator
 */
-#define evlist__for_each_safe(evlist, tmp, evsel) \
-	__evlist__for_each_safe(&(evlist)->entries, tmp, evsel)
+#define evlist__for_each_entry_safe(evlist, tmp, evsel) \
+	__evlist__for_each_entry_safe(&(evlist)->entries, tmp, evsel)
 
 void perf_evlist__set_tracking_event(struct perf_evlist *evlist,
 				     struct perf_evsel *tracking_evsel);
@@ -321,4 +364,7 @@ void perf_event_attr__set_max_precise_ip(struct perf_event_attr *attr);
 
 struct perf_evsel *
 perf_evlist__find_evsel_by_str(struct perf_evlist *evlist, const char *str);
+
+struct perf_evsel *perf_evlist__event2evsel(struct perf_evlist *evlist,
+					    union perf_event *event);
 #endif /* __PERF_EVLIST_H */
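Editor's note: a rough usage sketch (not part of the patch) of the state
machine documented in the diff above, following the numbered transitions
in the comment: pause the backward buffers, catch up and drain each map,
then resume. It assumes the surrounding evlist fields (such as nr_mmaps)
and the error handling that perf's own caller in builtin-record.c provides.

/* Hypothetical consumer of the backward-mmap API declared above. */
static void drain_backward_buffers(struct perf_evlist *evlist)
{
        int i;

        /* (1) stop the kernel from overwriting the backward buffers */
        perf_evlist__toggle_bkw_mmap(evlist, BKW_MMAP_DATA_PENDING);

        /* (2) read whatever is pending from each map */
        for (i = 0; i < evlist->nr_mmaps; i++) {
                union perf_event *event;

                perf_evlist__mmap_read_catchup(evlist, i);
                while ((event = perf_evlist__mmap_read_backward(evlist, i)) != NULL) {
                        /* deliver the event, e.g. append it to perf.data */
                        perf_evlist__mmap_consume(evlist, i);
                }
        }

        /* (3) let the backward buffers record again */
        perf_evlist__toggle_bkw_mmap(evlist, BKW_MMAP_RUNNING);
}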