
WEBER AND EWING MOLECULAR THEORY

This theory was first advanced by Weber in 1852 and was later developed further by Ewing in 1890. Its basic assumption is that the molecules of all substances are inherently magnets, each having an N and an S pole. In the un-magnetized state, these small molecular magnets are supposed to lie in a haphazard manner, forming more or less closed loops. In accordance with the laws of attraction and repulsion, these closed magnetic circuits are satisfied internally, so the iron bar exhibits no resultant external magnetism. But when such a bar is placed in a magnetic field, i.e. under the influence of a magnetizing force, the molecular magnets turn about their axes and orient themselves more or less along straight lines parallel to the direction of the magnetizing force. This linear arrangement of the molecular magnets produces N polarity at one end of the bar and S polarity at the other (as shown in the figure). As the small magnets turn more nearly into the direction of the magnetizing force, it requires more and more of this force to produce a given turning moment, which accounts for magnetic saturation. On this theory, the hysteresis loss is attributed to the molecular friction of these turning magnets.
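
The saturation argument can be put in rough quantitative form. As a supplementary illustration (not part of the original statement of the theory), the turning moment T acting on a single molecular magnet of magnetic moment m inclined at an angle θ to a magnetizing force H may be written as

$$T = mH\sin\theta \qquad\Longrightarrow\qquad H = \frac{T}{m\sin\theta}$$

As a molecular magnet swings into line with the field, θ (and hence sin θ) becomes small, so an ever larger H is needed to produce the same turning moment T. Once nearly all the molecular magnets are aligned, a further increase of H adds little to the magnetization, which is the saturation described above.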



Because of the limited knowledge of molecular structure available in Weber's time, it was not possible to explain, firstly, why the molecules themselves are magnets and, secondly, why it is impossible to magnetize certain substances such as wood. The first objection was answered by Ampere, who maintained that the orbital movement of the electrons round the nucleus of an atom constitutes a flow of current which, through its associated magnetic effect, makes the molecule a magnet. Later on, however, the theory found it difficult to explain the phenomenon of diamagnetism (shown by materials like water, quartz, silver and copper), the erratic behavior of ferromagnetic (intensely magnetisable) substances like iron, steel, cobalt, nickel and some of their alloys, and the behavior of paramagnetic (weakly magnetisable) substances like oxygen and aluminum. Moreover, the question remained: if the molecules of all substances are magnets, then why do wood, air and the like not become magnetized?
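
Ampere's picture can be made concrete with a rough supplementary calculation (not given in the original text): an electron of charge e completing one orbit of radius r every T seconds is equivalent to a current I = e/T, and a plane current loop of area A has a magnetic moment

$$m = IA = \frac{e}{T}\,\pi r^{2}$$

For the innermost Bohr orbit this works out to about 9.27 × 10⁻²⁴ A·m² (one Bohr magneton), which is why each atom or molecule can behave as a tiny magnet.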


All this has been explained satisfactorily by the atom-domain theory, which has superseded the molecular theory. A detailed treatment is beyond the scope of this book; the interested reader is advised to refer to a standard text on magnetism. It may, however, be mentioned that this theory takes into account not only the orbital (planetary) motion of an electron but also its rotation about its own axis, called 'electron spin'. The gyroscopic behavior of an electron gives rise to a magnetic moment which may be either positive or negative. A substance is ferromagnetic or diamagnetic according as there is an excess of unbalanced positive or negative spins. Substances like wood or air are non-magnetisable because their positive and negative electron spins are equal in number and therefore cancel each other out.
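
As a supplementary quantitative note (not in the original text), the magnetic moment associated with electron spin is of the order of one Bohr magneton:

$$\mu_B = \frac{e\hbar}{2m_e} \approx 9.27 \times 10^{-24}\ \text{A·m}^2$$

In atoms where the electrons are all paired, the two electrons of each pair spin in opposite senses, so their spin moments cancel; this is the quantitative counterpart of the statement that substances like wood or air show no net magnetism.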
