refactor: drop Python 3.5 support
Signed-off-by: Xuehai Pan <[email protected]>
XuehaiPan committed Dec 4, 2022
1 parent afe3321 commit 330b2f8
Showing 34 changed files with 216 additions and 215 deletions.
2 changes: 1 addition & 1 deletion .github/PULL_REQUEST_TEMPLATE.md
@@ -14,7 +14,7 @@

- Operating system and version: [e.g. Ubuntu 20.04 LTS / Windows 10 Build 19043.1110]
- Terminal emulator and version: [e.g. GNOME Terminal 3.36.2 / Windows Terminal 1.8.1521.0]
- Python version: [e.g. `3.5.6` / `3.9.6`]
- Python version: [e.g. `3.6.6` / `3.9.6`]
- NVML version (driver version): [e.g. `460.84`]
- `nvitop` version or commit: [e.g. `0.10.0` / `0.10.1.dev7+ga083321` / `main@75ae3c`]
- `python-ml-py` version: [e.g. `11.450.51`]
14 changes: 7 additions & 7 deletions .github/workflows/build.yaml
@@ -49,19 +49,19 @@ jobs:
id: py
uses: actions/setup-python@v4
with:
python-version: "3.5 - 3.11"
python-version: "3.6 - 3.11"
update-environment: true

- name: Set up Python 3.5
id: py35
- name: Set up Python 3.6
id: py36
uses: actions/setup-python@v4
with:
python-version: "3.5"
python-version: "3.6"
update-environment: false

- name: Check syntax (Python 3.5)
- name: Check syntax (Python 3.6)
run: |
"${{ steps.py35.outputs.python-path }}" -m compileall nvitop
"${{ steps.py36.outputs.python-path }}" -m compileall nvitop
- name: Upgrade build dependencies
run: python -m pip install --upgrade pip setuptools wheel build
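The syntax-check step above runs `python -m compileall nvitop` under a bare Python 3.6 interpreter: byte-compiling every module catches syntax the minimum supported interpreter cannot parse, without importing or executing anything. A rough sketch of the same check done programmatically (the module content here is made up for illustration):

```python
import compileall
import pathlib
import tempfile

# Write a module that uses 3.6+ syntax (an f-string), then byte-compile it.
# On an interpreter too old for the syntax it uses, compilation fails and
# compile_file() returns a falsy result -- which is what makes this a cheap
# minimum-version syntax gate in CI.
src = pathlib.Path(tempfile.mkdtemp()) / 'mod.py'
src.write_text("x = 1\nlabel = f'value={x}'\n")

ok = compileall.compile_file(str(src), quiet=2)
assert ok  # parses fine on Python 3.6+
```

Because `compileall` only compiles and never imports, the check stays fast and side-effect free even for modules with heavy import-time dependencies.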
@@ -118,7 +118,7 @@ jobs:
uses: actions/setup-python@v4
if: startsWith(github.ref, 'refs/tags/')
with:
python-version: "3.5 - 3.11"
python-version: "3.6 - 3.11"
update-environment: true

- name: Set __release__
12 changes: 6 additions & 6 deletions .github/workflows/lint.yaml
@@ -27,19 +27,19 @@ jobs:
id: py
uses: actions/setup-python@v4
with:
python-version: "3.5 - 3.11"
python-version: "3.6 - 3.11"
update-environment: true

- name: Set up Python 3.5
id: py35
- name: Set up Python 3.6
id: py36
uses: actions/setup-python@v4
with:
python-version: "3.5"
python-version: "3.6"
update-environment: false

- name: Check syntax (Python 3.5)
- name: Check syntax (Python 3.6)
run: |
"${{ steps.py35.outputs.python-path }}" -m compileall nvitop
"${{ steps.py36.outputs.python-path }}" -m compileall nvitop
- name: Upgrade pip
run: |
7 changes: 6 additions & 1 deletion .pre-commit-config.yaml
@@ -27,7 +27,12 @@ repos:
rev: 22.10.0
hooks:
- id: black
args: [--safe]
stages: [commit, push, manual]
- repo: https://github.com/asottile/pyupgrade
rev: v3.3.0
hooks:
- id: pyupgrade
args: [--py36-plus]
stages: [commit, push, manual]
- repo: local
hooks:
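The `pyupgrade` hook added above with `--py36-plus` mechanically rewrites pre-3.6 idioms into their 3.6+ equivalents, which is exactly the kind of change applied throughout the Python sources in this commit. A minimal sketch of one such rewrite (the variable names are illustrative, not from nvitop):

```python
# Before: the Python 3.5-compatible str.format() spelling.
name, index = 'gpu', 0
old_style = '{}:{}'.format(name, index)

# After: the f-string form that `pyupgrade --py36-plus` emits (Python 3.6+).
new_style = f'{name}:{index}'

# Both spellings produce the same string; only readability changes.
assert old_style == new_style == 'gpu:0'
```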
2 changes: 1 addition & 1 deletion .pylintrc
@@ -84,7 +84,7 @@ persistent=yes

# Minimum Python version to use for version dependent checks. Will default to
# the version used to run pylint.
py-version=3.5
py-version=3.6

# Discover python modules and packages in the file system subtree.
recursive=no
10 changes: 5 additions & 5 deletions README.md
@@ -2,7 +2,7 @@

<!-- markdownlint-disable html -->

![Python 3.5+](https://img.shields.io/badge/Python-3.5%2B-brightgreen)
![Python 3.6+](https://img.shields.io/badge/Python-3.6%2B-brightgreen)
[![PyPI](https://img.shields.io/pypi/v/nvitop?label=pypi&logo=pypi)](https://pypi.org/project/nvitop)
![Status](https://img.shields.io/pypi/status/nvitop?label=status)
[![conda-forge](https://img.shields.io/conda/vn/conda-forge/nvitop?label=conda&logo=condaforge)](https://anaconda.org/conda-forge/nvitop)
@@ -107,7 +107,7 @@ If this repo is useful to you, please star ⭐️ it to let more people know

## Requirements

- Python 3.5+ (with `pip>=10.0`)
- Python 3.6+ (with `pip>=10.0`)
- NVIDIA Management Library (NVML)
- nvidia-ml-py
- psutil
@@ -297,7 +297,7 @@ ssh user@host -t '~/.local/bin/nvitop' # installed by `pip3 install --user ...`
Type `nvitop --help` for more command options:

```text
usage: nvitop [--help] [--version] [--once] [--monitor [{auto,full,compact}]]
usage: nvitop [--help] [--version] [--once | --monitor [{auto,full,compact}]]
[--interval SEC] [--ascii] [--colorful] [--force-color] [--light]
[--gpu-util-thresh th1 th2] [--mem-util-thresh th1 th2]
[--only idx [idx ...]] [--only-visible]
@@ -306,7 +306,7 @@ usage: nvitop [--help] [--version] [--once | --monitor [{auto,full,compact}]]
An interactive NVIDIA-GPU process viewer.
optional arguments:
options:
--help, -h Show this help message and exit.
--version, -V Show nvitop's version number and exit.
--once, -1 Report query data only once.
@@ -517,7 +517,7 @@ usage: nvisel [--help] [--version]
CUDA visible devices selection tool.
optional arguments:
options:
--help, -h Show this help message and exit.
--version, -V Show nvisel's version number and exit.
4 changes: 2 additions & 2 deletions docs/source/index.rst
@@ -13,7 +13,7 @@ An interactive NVIDIA-GPU process viewer, the one-stop solution for GPU process
.. |GitHub| image:: https://img.shields.io/badge/GitHub-Homepage-blue?logo=github
.. _GitHub: https://github.com/XuehaiPan/nvitop

.. |Python Version| image:: https://img.shields.io/badge/Python-3.5%2B-brightgreen
.. |Python Version| image:: https://img.shields.io/badge/Python-3.6%2B-brightgreen
.. _Python Version: https://pypi.org/project/nvitop

.. |PyPI Package| image:: https://img.shields.io/pypi/v/nvitop?label=pypi&logo=pypi
@@ -62,7 +62,7 @@ Install from PyPI (|PyPI Package|_ / |Package Status|_):
.. note::

Python 3.5+ is required, and Python versions lower than 3.5 is not supported.
Python 3.6+ is required, and Python versions lower than 3.6 are not supported.

Install from conda-forge (|Conda-forge Package|_):

3 changes: 3 additions & 0 deletions docs/source/spelling_wordlist.txt
@@ -128,3 +128,6 @@ bstate
getmouse
uncase
lol
xx
yyy
zz
4 changes: 1 addition & 3 deletions nvitop/callbacks/tensorboard.py
@@ -21,6 +21,4 @@
def add_scalar_dict(writer, main_tag, tag_scalar_dict, global_step=None, walltime=None):
"""Batched version of `writer.add_scalar`"""
for tag, scalar in tag_scalar_dict.items():
writer.add_scalar(
'{}/{}'.format(main_tag, tag), scalar, global_step=global_step, walltime=walltime
)
writer.add_scalar(f'{main_tag}/{tag}', scalar, global_step=global_step, walltime=walltime)
16 changes: 8 additions & 8 deletions nvitop/callbacks/utils.py
@@ -47,22 +47,22 @@ def get_gpu_stats(

stats = {}
for device in devices:
prefix = 'gpu_id: {}'.format(device.cuda_index)
prefix = f'gpu_id: {device.cuda_index}'
if device.cuda_index != device.physical_index:
prefix += ' (physical index: {})'.format(device.physical_index)
prefix += f' (physical index: {device.physical_index})'
with device.oneshot():
if memory_utilization or gpu_utilization:
utilization = device.utilization_rates()
if memory_utilization:
stats['{}/utilization.memory (%)'.format(prefix)] = float(utilization.memory)
stats[f'{prefix}/utilization.memory (%)'] = float(utilization.memory)
if gpu_utilization:
stats['{}/utilization.gpu (%)'.format(prefix)] = float(utilization.gpu)
stats[f'{prefix}/utilization.gpu (%)'] = float(utilization.gpu)
if memory_utilization:
stats['{}/memory.used (MiB)'.format(prefix)] = float(device.memory_used()) / MiB
stats['{}/memory.free (MiB)'.format(prefix)] = float(device.memory_free()) / MiB
stats[f'{prefix}/memory.used (MiB)'] = float(device.memory_used()) / MiB
stats[f'{prefix}/memory.free (MiB)'] = float(device.memory_free()) / MiB
if fan_speed:
stats['{}/fan.speed (%)'.format(prefix)] = float(device.fan_speed())
stats[f'{prefix}/fan.speed (%)'] = float(device.fan_speed())
if temperature:
stats['{}/temperature.gpu (C)'.format(prefix)] = float(device.fan_speed())
stats[f'{prefix}/temperature.gpu (C)'] = float(device.temperature())

return stats
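The converted `get_gpu_stats` builds flat metric keys of the form `gpu_id: <cuda index>[ (physical index: <n>)]/<metric> (<unit>)`. A small sketch of the same key-building logic with a stand-in device object (the `FakeDevice` class is hypothetical, for illustration only):

```python
class FakeDevice:
    """Hypothetical stand-in for nvitop's Device, for illustration."""
    cuda_index = 0
    physical_index = 2  # differs from cuda_index, e.g. under CUDA_VISIBLE_DEVICES

device = FakeDevice()

# Same f-string key construction as in the diff above.
prefix = f'gpu_id: {device.cuda_index}'
if device.cuda_index != device.physical_index:
    prefix += f' (physical index: {device.physical_index})'

stats = {f'{prefix}/utilization.gpu (%)': 75.0}
assert prefix == 'gpu_id: 0 (physical index: 2)'
```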
14 changes: 7 additions & 7 deletions nvitop/cli.py
@@ -52,7 +52,7 @@ def posint(argstring):
'-V',
dest='version',
action='version',
version='%(prog)s {}'.format(__version__),
version=f'%(prog)s {__version__}',
help="Show %(prog)s's version number and exit.",
)

@@ -301,14 +301,14 @@ def main(): # pylint: disable=too-many-branches,too-many-statements,too-many-lo
invalid_indices = indices.difference(range(device_count))
indices.intersection_update(range(device_count))
if len(invalid_indices) > 1:
messages.append('ERROR: Invalid device indices: {}.'.format(sorted(invalid_indices)))
messages.append(f'ERROR: Invalid device indices: {sorted(invalid_indices)}.')
elif len(invalid_indices) == 1:
messages.append('ERROR: Invalid device index: {}.'.format(list(invalid_indices)[0]))
messages.append(f'ERROR: Invalid device index: {list(invalid_indices)[0]}.')
elif args.only_visible:
indices = set(
indices = {
index if isinstance(index, int) else index[0]
for index in Device.parse_cuda_visible_devices()
)
}
else:
indices = range(device_count)
devices = Device.from_indices(sorted(set(indices)))
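The `set(...)` call around a generator in the hunk above becomes a set comprehension. Set comprehensions have existed since Python 2.7, so this is a readability cleanup by pyupgrade rather than a version-gated change; the result is identical. A sketch with made-up parsed values (`parse_cuda_visible_devices()` can yield plain ints or tuples whose first element is the index):

```python
# Sample values are illustrative, not real parse_cuda_visible_devices() output.
parsed = [0, (1, 0), 3]

# Set comprehension: same elements as set(generator), one fewer name lookup.
indices = {index if isinstance(index, int) else index[0] for index in parsed}
assert indices == {0, 1, 3}
```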
@@ -345,7 +345,7 @@ def main(): # pylint: disable=too-many-branches,too-many-statements,too-many-lo
except curses.error as ex:
if ui is not None:
raise
messages.append('ERROR: Failed to initialize `curses` ({})'.format(ex))
messages.append(f'ERROR: Failed to initialize `curses` ({ex})')

if ui is None:
ui = UI(devices, filters, ascii=args.ascii)
@@ -367,7 +367,7 @@ def main(): # pylint: disable=too-many-branches,too-many-statements,too-many-lo
else 'ERROR: A FunctionNotFound error occurred while calling:'
]
unknown_function_messages.extend(
' nvmlQuery({.__name__!r}, *args, **kwargs)'.format(func)
f' nvmlQuery({func.__name__!r}, *args, **kwargs)'
for func, _ in libnvml.UNKNOWN_FUNCTIONS.values()
)
unknown_function_messages.append(
40 changes: 20 additions & 20 deletions nvitop/core/collector.py
@@ -36,9 +36,9 @@
__all__ = ['take_snapshots', 'collect_in_background', 'ResourceMetricCollector']


SnapshotResult = NamedTuple(
'SnapshotResult', [('devices', List[Snapshot]), ('gpu_processes', List[Snapshot])]
)
class SnapshotResult(NamedTuple): # pylint: disable=missing-class-docstring
devices: List[Snapshot]
gpu_processes: List[Snapshot]


timer = time.monotonic
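The functional `NamedTuple('SnapshotResult', [...])` call above is replaced with the class-based form, which relies on PEP 526 variable annotations and therefore requires Python 3.6+. Both forms define the same tuple type; a sketch with simplified field types (the real code uses `List[Snapshot]`):

```python
from typing import List, NamedTuple

class SnapshotResult(NamedTuple):
    """Class-based NamedTuple: fields declared as annotations (3.6+)."""
    devices: List[str]        # simplified stand-in for List[Snapshot]
    gpu_processes: List[str]

result = SnapshotResult(devices=['gpu:0'], gpu_processes=[])
assert result._fields == ('devices', 'gpu_processes')
assert result.devices == ['gpu:0']
```

The class form also allows docstrings and default values directly in the definition, which the functional form cannot express as cleanly.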
@@ -48,7 +48,7 @@
def take_snapshots(
devices: Optional[Union[Device, Iterable[Device]]] = None,
*,
gpu_processes: Optional[Union[bool, GpuProcess, Iterable[GpuProcess]]] = None
gpu_processes: Optional[Union[bool, GpuProcess, Iterable[GpuProcess]]] = None,
) -> SnapshotResult:
"""Retrieves status of demanded devices and GPU processes.
@@ -187,6 +187,7 @@ def collect_in_background(
on_collect: Callable[[Dict[str, float]], bool],
collector: Optional['ResourceMetricCollector'] = None,
interval: Optional[float] = None,
*,
on_start: Optional[Callable[['ResourceMetricCollector'], None]] = None,
on_stop: Optional[Callable[['ResourceMetricCollector'], None]] = None,
tag: str = 'metrics-daemon',
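The bare `*` inserted into the signature above makes every parameter after it keyword-only, so callers can no longer pass the callbacks or the tag positionally. A minimal sketch with a simplified signature (parameters reduced for illustration):

```python
def collect_in_background(on_collect, interval=None, *, tag='metrics-daemon'):
    """Sketch: parameters after the bare * must be passed by keyword."""
    return interval, tag

# Keyword call works; passing `tag` positionally is rejected.
assert collect_in_background(print, 1.0, tag='demo') == (1.0, 'demo')
try:
    collect_in_background(print, 1.0, 'demo')
    raised = False
except TypeError:
    raised = True
assert raised
```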
@@ -250,7 +251,7 @@ def on_stop(collector): # will be called only once at stop
elif interval is None:
interval = collector.interval
else:
raise ValueError('Invalid argument interval={!r}'.format(interval))
raise ValueError(f'Invalid argument interval={interval!r}')
interval = min(interval, collector.interval)

def target():
@@ -400,7 +401,7 @@ def __init__(
if isinstance(interval, (int, float)) and interval > 0:
interval = float(interval)
else:
raise ValueError('Invalid argument interval={!r}'.format(interval))
raise ValueError(f'Invalid argument interval={interval!r}')

if devices is None:
devices = Device.all()
@@ -463,9 +464,7 @@ def activate(self, tag: str) -> 'ResourceMetricCollector':
self._metric_buffer = _MetricBuffer(tag, self, prev=self._metric_buffer)
self._last_timestamp = timer() - 2.0 * self.interval
else:
raise RuntimeError(
'Resource metric collector is already started with tag "{}"'.format(tag)
)
raise RuntimeError(f'Resource metric collector is already started with tag "{tag}"')

self._daemon_running.set()
try:
@@ -493,7 +492,7 @@ def deactivate(self, tag: Optional[str] = None) -> 'ResourceMetricCollector':
tag = self._metric_buffer.tag
elif tag not in self._tags:
raise RuntimeError(
'Resource metric collector has not been started with tag "{}".'.format(tag)
f'Resource metric collector has not been started with tag "{tag}".'
)

buffer = self._metric_buffer
@@ -576,7 +575,7 @@ def clear(self, tag: Optional[str] = None) -> None:
tag = self._metric_buffer.tag
elif tag not in self._tags:
raise RuntimeError(
'Resource metric collector has not been started with tag "{}".'.format(tag)
f'Resource metric collector has not been started with tag "{tag}".'
)

buffer = self._metric_buffer
@@ -602,6 +601,7 @@ def daemonize(
self,
on_collect: Callable[[Dict[str, float]], bool],
interval: Optional[float] = None,
*,
on_start: Optional[Callable[['ResourceMetricCollector'], None]] = None,
on_stop: Optional[Callable[['ResourceMetricCollector'], None]] = None,
tag: str = 'metrics-daemon',
@@ -728,23 +728,23 @@ def take_snapshots(self) -> SnapshotResult:

device_identifiers = {}
for device in devices:
identifier = 'gpu:{}'.format(device.index)
identifier = f'gpu:{device.index}'
if isinstance(device.real, CudaDevice):
identifier = 'cuda:{} ({})'.format(device.cuda_index, identifier)
identifier = f'cuda:{device.cuda_index} ({identifier})'
device_identifiers[device.real] = identifier

for attr, name, unit in self.DEVICE_METRICS:
value = float(getattr(device, attr)) / unit
metrics['{}/{}'.format(identifier, name)] = value
metrics[f'{identifier}/{name}'] = value

for process in gpu_processes:
device_identifier = device_identifiers[process.device]
identifier = 'pid:{}'.format(process.pid)
identifier = f'pid:{process.pid}'

for attr, scope, name, unit in self.PROCESS_METRICS:
scope = scope or device_identifier
value = float(getattr(process, attr)) / unit
metrics['{}/{}/{}'.format(identifier, scope, name)] = value
metrics[f'{identifier}/{scope}/{name}'] = value

with self._lock:
if self._metric_buffer is not None:
@@ -769,7 +769,7 @@ def __init__(

self.tag = tag
if self.prev is not None:
self.key_prefix = '{}/{}'.format(self.prev.key_prefix, self.tag)
self.key_prefix = f'{self.prev.key_prefix}/{self.tag}'
else:
self.key_prefix = self.tag

@@ -799,7 +799,7 @@ def clear(self) -> None:

def collect(self) -> Dict[str, float]:
metrics = {
'{}/{}/{}'.format(self.key_prefix, key, name): value
f'{self.key_prefix}/{key}/{name}': value
for key, stats in self.buffer.items()
for name, value in stats.items()
}
@@ -811,8 +811,8 @@ def collect(self) -> Dict[str, float]:
'host/running_time (min)/min'
):
del metrics[key]
metrics['{}/duration (s)'.format(self.key_prefix)] = timer() - self.start_timestamp
metrics['{}/timestamp'.format(self.key_prefix)] = time.time()
metrics[f'{self.key_prefix}/duration (s)'] = timer() - self.start_timestamp
metrics[f'{self.key_prefix}/timestamp'] = time.time()
return metrics

def __len__(self) -> int: