Do I need to re-invent the Unity UI system to implement UI in 3D?

I am building a VR modelling app with Unity, and it involves a lot of direct manipulation in VR/3D. I have therefore created an InputManager and a SelectionManager, along with a bunch of classes/interfaces to enable 3D UI (handles, buttons, sliders, 3D layout, etc.). Then I suddenly realized that my design resembles Unity's UI event system and its interfaces: Selectable, IDeselectHandler, IEventSystemHandler, IMoveHandler, IPointerDownHandler, IPointerEnterHandler, IPointerExitHandler, IPointerUpHandler, ISelectHandler. And I realized how stupid it is to re-invent the wheel.
But Selectable is derived from UIBehaviour, and it looks like it's designed for Canvas/UI elements and would not work well if the objects are not Unity UI objects. I have not fully confirmed this, though; a sketch of the kind of setup I was hoping would work follows.
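
For context, this is roughly what I was hoping would be possible: a plain MonoBehaviour on a 3D object (with a collider) implementing the pointer interfaces directly, relying on an EventSystem plus a PhysicsRaycaster on the camera rather than anything Canvas-based. The class name `Handle3D` is just for illustration, and I haven't verified this setup end to end, so treat it as a sketch:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Sketch: a 3D handle that receives EventSystem pointer events without being a UI element.
// Assumes the scene has an EventSystem, the camera has a PhysicsRaycaster,
// and this GameObject has a Collider.
[RequireComponent(typeof(Collider))]
public class Handle3D : MonoBehaviour,
    IPointerEnterHandler, IPointerExitHandler,
    IPointerDownHandler, IPointerUpHandler
{
    public void OnPointerEnter(PointerEventData eventData)
    {
        Debug.Log($"Hover enter: {name}");
    }

    public void OnPointerExit(PointerEventData eventData)
    {
        Debug.Log($"Hover exit: {name}");
    }

    public void OnPointerDown(PointerEventData eventData)
    {
        // Begin a grab/drag here; the raycast gives the 3D hit point.
        Debug.Log($"Pointer down at {eventData.pointerCurrentRaycast.worldPosition}");
    }

    public void OnPointerUp(PointerEventData eventData)
    {
        Debug.Log($"Pointer up: {name}");
    }
}
```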

So has anyone built similar stuff and used the Unity UI System? Or is this just the way it is and I need to build my own?
Note that I already use the Unity UI system for traditional menus and buttons (world space, of course), and my InputManager has to check every incoming event to see whether it's a Unity UI event or one of my own, even though the semantics are practically identical (see the sketch below). 'Tis ugly.
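
Here is a simplified version of the kind of routing check I mean. Real VR controller input is messier than a mouse click, and the class and method names (`InputManagerSketch`, `HandleSceneClick`) are made up for illustration, but `EventSystem.current.IsPointerOverGameObject()` captures the idea:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Simplified sketch of the routing check in my InputManager:
// if the Unity EventSystem already owns the pointer, let the UI handle it;
// otherwise dispatch the event to my own 3D selection/manipulation pipeline.
public class InputManagerSketch : MonoBehaviour
{
    void Update()
    {
        if (!Input.GetMouseButtonDown(0))
            return;

        if (EventSystem.current != null && EventSystem.current.IsPointerOverGameObject())
        {
            // Unity UI (world-space canvas) gets the event.
            return;
        }

        // Otherwise it's one of "my" events: raycast into the scene and
        // forward the hit to my SelectionManager / handle system.
        HandleSceneClick();
    }

    void HandleSceneClick()
    {
        Debug.Log("Routing event to the custom 3D UI pipeline.");
    }
}
```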
