Complex assembly tasks involve nonlinear, low-clearance insertion trajectories with varying contact forces at different stages. To solve these tasks, a robot requires a precise and adaptive controller that conventional force control methods cannot provide. Imitation learning is a promising approach for learning controllers that can follow such nonlinear trajectories from human demonstrations, without explicitly programming them into the robot. However, the force profiles obtained from human demonstrations via tele-operation tend to be sub-optimal for complex assembly tasks, so imitating them directly is undesirable. Reinforcement learning learns adaptive control policies through interaction with the environment, but it suffers from low sample efficiency and causes equipment wear and tear in the physical world. To address these problems, we present a combined learning-based framework that solves complex robotic assembly tasks from human demonstrations via hybrid trajectory learning and force learning. The main contribution of this work is a framework that combines imitation learning, which learns the nominal motion trajectory, with a reinforcement learning-based force control scheme that learns an optimal force control policy, satisfying the nominal trajectory while adapting to the force requirements of the assembly task. To further improve the imitation learning component, we develop a hierarchical architecture, following the idea of goal-conditioned imitation learning, that generates the trajectory learning policy at the skill level offline. Through experimental validation, we corroborate that the proposed learning-based framework generates high-quality trajectories and finds suitable force control policies that adapt to the tasks' force requirements more efficiently.
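The hybrid scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: all class and function names are hypothetical, and the force policy is reduced to a fixed admittance-style gain standing in for a learned RL policy. The idea it shows is the split of responsibilities, with an imitation-learned policy supplying the nominal Cartesian target at each step and a force policy adding a small correction driven by the contact-force error.

```python
import numpy as np

class NominalTrajectoryPolicy:
    """Stand-in for an imitation-learned trajectory policy (hypothetical)."""
    def __init__(self, waypoints):
        self.waypoints = waypoints  # (T, 3) Cartesian positions

    def target(self, t):
        # Return the nominal Cartesian target for time step t.
        return self.waypoints[min(t, len(self.waypoints) - 1)]

class ForceControlPolicy:
    """Stand-in for an RL-learned force policy: maps the contact-force
    error to a small corrective position offset (admittance-style)."""
    def __init__(self, compliance=1e-3):
        self.compliance = compliance  # in the paper's setting, learned by RL

    def correction(self, sensed_force, desired_force):
        # Move along the force error to meet the task's force requirement.
        return self.compliance * (desired_force - sensed_force)

def hybrid_step(traj_policy, force_policy, t, sensed_force, desired_force):
    """One control step: nominal target from imitation learning plus
    a force-adaptive correction from the force policy."""
    nominal = traj_policy.target(t)
    return nominal + force_policy.correction(sensed_force, desired_force)

# Toy usage: a straight-line insertion with a constant desired contact force.
waypoints = np.linspace([0.0, 0.0, 0.10], [0.0, 0.0, 0.0], 50)
traj = NominalTrajectoryPolicy(waypoints)
force = ForceControlPolicy(compliance=1e-3)
cmd = hybrid_step(traj, force, t=10,
                  sensed_force=np.array([0.0, 0.0, 2.0]),
                  desired_force=np.array([0.0, 0.0, 5.0]))
# cmd is the nominal waypoint shifted slightly along z toward higher contact force.
```

In the actual framework, the corrective term would come from a trained reinforcement learning policy rather than a constant gain, allowing the correction to adapt to the varying contact forces at different stages of the insertion.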