With the development of artificial intelligence and the Internet of Things (AIoT) and rapid advances in hardware technology, smart speakers are becoming part of everyday life, and human-computer interaction has shifted from remote control to voice control. However, the audio signals recorded by a device's microphones usually contain considerable noise and interfering voices, so source separation must be performed on the recorded signals. Frequency-domain independent component analysis (ICA) is a widely used separation technique, but it suffers from the permutation indeterminacy problem: in some frequency bins, the components separated from Source 1 are assigned to the output channel of Source 2, and vice versa, which greatly degrades separation performance. To address this issue, we propose an algorithm based on the speech energy ratio that effectively improves separation performance. The algorithm was evaluated on the Signal Separation Evaluation Campaign (SiSEC) and Computational Hearing in Multisource Environments (CHiME) datasets. The results show that the proposed algorithm outperforms existing algorithms and maintains good separation of mixed signals even in strongly reverberant environments.
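The permutation indeterminacy described above can be illustrated with a small toy sketch: per-bin separation leaves each frequency bin's output channels in an arbitrary order, and some alignment cue is needed to make the ordering consistent across bins. The sketch below uses synthetic data and a simple envelope-correlation cue for alignment; it is an illustration of the problem, not an implementation of the paper's speech-energy-ratio algorithm, and all names and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_frames = 8, 200

# Two synthetic sources with distinct time-activity envelopes,
# shared across all frequency bins (toy stand-in for STFT magnitudes).
env = np.zeros((2, n_frames))
env[0, :100] = 1.0   # source 1 active in the first half
env[1, 100:] = 1.0   # source 2 active in the second half
S = env[None, :, :] * (1 + 0.1 * rng.standard_normal((n_bins, 2, n_frames)))

# Per-bin "separation": each bin's outputs come back in a random
# channel order, mimicking the permutation indeterminacy of
# frequency-domain ICA.
Y = S.copy()
flipped = rng.random(n_bins) < 0.5
Y[flipped] = Y[flipped][:, ::-1, :]

# Resolve the permutation by correlating each bin's magnitude
# envelopes with a reference bin (a common alignment cue; the paper
# instead uses a speech-energy-ratio criterion).
ref = np.abs(Y[0])
aligned = np.empty_like(Y)
for f in range(n_bins):
    a = np.abs(Y[f])
    keep = np.corrcoef(ref[0], a[0])[0, 1] + np.corrcoef(ref[1], a[1])[0, 1]
    swap = np.corrcoef(ref[0], a[1])[0, 1] + np.corrcoef(ref[1], a[0])[0, 1]
    aligned[f] = Y[f] if keep >= swap else Y[f][::-1]
```

After alignment, every bin's channel 0 carries the same source (up to one global swap, which is harmless), so reassembling the bins yields coherent full-band signals instead of sources scrambled across frequency.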