The rise of advanced technologies such as 6G networks and mobile edge computing has enabled large-scale, real-time integration of distributed edge devices, opening new possibilities for collaborative intelligence applications such as immersive extended reality and autonomous drone fleets. However, heterogeneity in device capabilities and network conditions often leads to performance bottlenecks and reduced efficiency. Split Learning (SL) has emerged as a key paradigm for addressing these challenges, distributing neural network processing across devices and servers to balance computational load and communication cost. Yet existing SL frameworks typically optimize subtasks such as data preprocessing, model partitioning, and resource allocation in isolation, overlooking the coupling among them: decisions in one subtask can inadvertently affect the others, degrading performance or efficiency. These issues are exacerbated by dynamic environments and evolving performance objectives. To overcome these challenges, we propose a hypergraph-aided dynamic model splitting mechanism that explicitly models and exploits the relations among subtask optimizations under multiple objectives, such as minimizing latency and maximizing accuracy. In addition, a meta-reinforcement learning algorithm pretrains a meta-policy across diverse scenarios, enabling rapid adaptation to dynamic conditions without frequent retraining. Experimental results demonstrate that our approach outperforms baseline methods, achieving superior performance under varying conditions.