Use the Maximum Queue Entries Supported (MQES) field to initialize the
I/O queue depth rather than picking a fixed number (256) that might not
be supported by some NVMe controllers (the NVMe specification says that
a controller may support any depth between 2 and 4096).
Still cap the I/O queue depth at 256 since, during my testing, SeaBIOS
was running out of memory when using anything higher than 256 (4096 on
the NVMe controller that I had a chance to try).
Signed-off-by: Filippo Sironi <sironi@amazon.de>
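
A minimal sketch of the clamping described above, assuming the 64-bit
controller capabilities (CAP) register has already been read; MQES is
the 0's-based queue-size field in bits 15:0, and the helper name is
illustrative rather than the actual SeaBIOS code:

    /* MQES is 0's-based, so the supported depth is MQES + 1. */
    static u32 nvme_io_queue_depth(u64 cap)
    {
        u32 depth = (u32)(cap & 0xffff) + 1;
        /* Cap at 256: larger queues exhausted SeaBIOS's memory in testing. */
        return (depth < 256) ? depth : 256;
    }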
If the allocation of the I/O queues ran out of memory, the code failed
to detect that and happily used those queues at address zero. For me
this happens on systems with more than 7 NVMe controllers.
Fix the out-of-memory handling to handle this case gracefully.
Signed-off-by: Julian Stecklina <jsteckli@amazon.de>
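
A sketch of the kind of check the fix adds, using SeaBIOS's
zalloc_page_aligned() allocator; the surrounding error path is
illustrative:

    /* zalloc_page_aligned() returns NULL when the zone is exhausted;
     * bail out instead of programming the controller with address 0. */
    void *q_mem = zalloc_page_aligned(&ZoneHigh, NVME_PAGE_SIZE);
    if (!q_mem) {
        warn_noalloc();
        return -1; /* propagate the failure to the caller */
    }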
Now that the drive_s struct no longer needs to be in the f-segment,
rename references to drive_gf in the generic drive code to drive_fl.
This is just a variable rename - no code changes.
Tested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin O'Connor <kevin@koconnor.net>
NVMe support was tested on purism/librem13 laptops, and SeaBIOS has
no problems detecting and booting from the drives.
This is a continuation of commit 235a8190, which was incomplete.
Signed-off-by: Youness Alaoui <youness.alaoui@puri.sm>
Signed-off-by: Kevin O'Connor <kevin@koconnor.net>
A couple of users have reported success with the NVMe driver on real
hardware, so allow it to be enabled outside of QEMU.
Signed-off-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Daniel Verkamp <daniel@drv.nu>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
The status code field is 8 bits wide starting at bit 1; the previous
code would truncate the top bit.
Signed-off-by: Daniel Verkamp <daniel@drv.nu>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
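
Concretely, with the 16-bit completion-queue-entry status field (bit 0
is the phase tag, bits 8:1 the status code), the corrected extraction
looks like this sketch (the helper name is illustrative):

    /* Shift past the phase tag, then keep all 8 status code bits;
     * a 7-bit mask (0x7f) here would drop the most significant bit. */
    static inline u8 nvme_cqe_status_code(u16 status)
    {
        return (status >> 1) & 0xff;
    }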
It looks like the intent was to exit the loop if a command failed, but
the current code would actually continue looping in that case.
Signed-off-by: Daniel Verkamp <daniel@drv.nu>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
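
A sketch of the intended control flow; the helper, field names, and
chunking are illustrative of SeaBIOS's disk API rather than the
verbatim loop:

    static int nvme_read_blocks(struct nvme_namespace *ns, u64 lba,
                                char *buf, u16 blocks, int write)
    {
        int res = DISK_RET_SUCCESS;
        u16 i, chunk;
        for (i = 0; i < blocks; i += chunk) {
            chunk = (blocks - i < ns->max_req_size) ? blocks - i
                                                    : ns->max_req_size;
            res = nvme_io_readwrite(ns, lba + i,
                                    buf + (u32)i * ns->block_size,
                                    chunk, write);
            if (res != DISK_RET_SUCCESS)
                break; /* exit on failure; the old test kept looping */
        }
        return res;
    }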
500 ms is not sufficient for the admin commands used during
initialization on some real hardware.
Signed-off-by: Daniel Verkamp <daniel@drv.nu>
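
A sketch of the completion-wait loop with the longer deadline, using
SeaBIOS's timer helpers; the polling function name is illustrative:

    /* Wait up to 5 seconds (was 500 ms) for the command to complete. */
    u32 deadline = timer_calc(5000);
    while (!nvme_poll_cq(cq)) {
        yield();
        if (timer_check(deadline)) {
            warn_timeout();
            return -1; /* command timed out */
        }
    }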
Rather than using the Identify command with CNS 01b (GET_NS_LIST), which
was added in NVMe 1.1, we can just enumerate all of the possible
namespace IDs.
The relevant part of the NVMe spec reads:

    Namespaces shall be allocated in order (starting with 1) and packed
    sequentially.
Since the previously-used GET_NS_LIST only returns active namespaces, we
also need a check in nvme_probe_ns() to ensure that inactive namespaces
are not reported as boot devices. This can be accomplished by checking
for non-zero block count - the spec indicates that Identify Namespace
for an inactive namespace ID will return all zeroes.
This should have no impact on the QEMU NVMe device model, since it
always reports exactly one namespace (NSID 1).
Signed-off-by: Daniel Verkamp <daniel@drv.nu>
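
A sketch of the enumeration plus the inactive-namespace check,
assuming ctrl->nn holds the Number of Namespaces value from Identify
Controller; the function shapes are illustrative:

    /* Probe every possible NSID from 1 to NN. An inactive namespace
     * identifies as all zeroes, so a zero block count means "skip". */
    u32 ns_id;
    for (ns_id = 1; ns_id <= ctrl->nn; ns_id++) {
        struct nvme_identify_ns *id = nvme_admin_identify_ns(ctrl, ns_id);
        if (!id || id->nsze == 0)
            continue;
        nvme_probe_ns(ctrl, ns_id, id);
    }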
This patch enables SeaBIOS to boot from NVMe. Finding namespaces and
basic I/O work. Testing has been done in QEMU, and so far it works with
GRUB, syslinux, and the FreeBSD loader. You need a recent QEMU (>=
2.7.0), because older versions have buggy NVMe support.
The NVMe code is currently only enabled on QEMU due to lack of testing
on real hardware.
Signed-off-by: Julian Stecklina <jsteckli@amazon.de>
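
For reference, a QEMU invocation along these lines exercises the
driver (the image path, drive id, and serial string are placeholders):

    qemu-system-x86_64 -bios out/bios.bin \
        -drive file=nvme.img,if=none,id=nvme0,format=raw \
        -device nvme,drive=nvme0,serial=seabios-test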