This topic is really interesting because it is not easy to answer.
Why? Well, there are different points and perspectives to consider.
When the NUMA architecture was planned and developed, computers were a little different from today's systems.
As your description suggests, could any non-uniform access to memory be considered a NUMA system? Yes, it could be, and (in a small way) no.
For example, multiple CPUs could share the same memory bank without any dedicated hardware implementation, and that is not a NUMA system.
But Linux (for example) abstracts the different memory regions through Direct Memory Access, a feature that was not available when the NUMA architecture was developed (I hope I'm not wrong about that), even in the case of mainboards with a separate I/O bus for non-uniform memories.
Another example is servers with different buses for different memory banks, where an SMP system can access its preferred memory slots or (with a small performance degradation) the other memory banks.
Now, with current technology, where the CPU is faster than the memory, it is hard to identify a NUMA system without considering the hardware architecture underneath.
So, I'm not sure what you're trying to determine with your question. I mean, the Linux NUMA tools will show you the system's topology, which is probably effectively doing uniform memory access, but what is your *underlying* question here?
Even if you only have one physical CPU package, you probably have multiple processors (cores) on it in a modern system. Those cores probably have effective uniform access to main memory, and if you use the NUMA tools the system would show up as having one NUMA node consisting of all of your processors and all of your main memory.
There's a discussion of NUMA in the Kernel developers' documentation at https://www.kernel.org/doc/html/v5.0/vm/numa.html that's pretty deep but that might be helpful to you.
The NUMA tools will work on systems that have only one NUMA node (making their memory access effectively uniform). You can use a number of tools to see what your current hardware topology is.
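If you want to check this programmatically rather than through the CLI tools, the Linux kernel exposes the node topology under /sys/devices/system/node/. Here is a minimal sketch (the helper name numa_nodes is my own, not from any tool mentioned above); it assumes a Linux system with sysfs mounted, and returns an empty list elsewhere:

```python
import glob
import re

def numa_nodes():
    """Return the NUMA node IDs that Linux exposes under sysfs.

    On a uniform-memory machine this is typically just [0]; on a
    multi-socket NUMA server you would see [0, 1, ...]. On non-Linux
    systems (or kernels without sysfs) the directory is absent and
    an empty list is returned.
    """
    nodes = []
    for path in glob.glob("/sys/devices/system/node/node[0-9]*"):
        match = re.search(r"node(\d+)$", path)
        if match:
            nodes.append(int(match.group(1)))
    return sorted(nodes)

print(numa_nodes())
```

A single-node result here corresponds to what numactl --hardware and lstopo would report as "node 0" covering all processors and memory.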
lstopo is fun because it has a mode that displays a graphical diagram of your system topology.
On the laptop I'm using right now, it shows that the one socket in this machine has a CPU package with four cores that have two processing units each, for eight schedulable "processors" total. (This is also reflected in /proc/cpuinfo.) Each core has its own L1i/L1d and L2 caches, and all four share an L3 cache. They all use the same NUMA node (node 0), which includes all the memory in the system. So memory access is effectively uniform. There are no other NUMA nodes on this system. You might want to try running that command on your system to see what you get!
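You can cross-check that "schedulable processors" count yourself. A small sketch (the helper name count_processors is hypothetical): os.cpu_count() reports logical processors portably, and on Linux the same number should show up as "processor" entries in /proc/cpuinfo, with a graceful fallback on other systems:

```python
import os

def count_processors():
    """Count schedulable logical processors two ways: the portable
    os.cpu_count(), and (on Linux) the number of 'processor' entries
    in /proc/cpuinfo. On non-Linux systems the second value is None."""
    portable = os.cpu_count()
    try:
        with open("/proc/cpuinfo") as f:
            from_proc = sum(1 for line in f if line.startswith("processor"))
    except OSError:
        from_proc = None  # /proc/cpuinfo does not exist here
    return portable, from_proc

print(count_processors())
```

On the eight-thread laptop described above both values would be 8; the two counts can differ in containers where the scheduler is restricted to a subset of CPUs.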
This HP whitepaper might also be useful for understanding things; it includes some interesting examples of numactl --hardware output for various server topologies.