Edge AI Deployment Complete Guide 2026: Strategy, Implementation & Best Practices
Edge AI—deploying artificial intelligence models directly on edge devices rather than cloud servers—enables low-latency inference, offline operation, data privacy, and reduced bandwidth costs. As AI models become more efficient and edge hardware more capable, edge AI deployment is expanding from specialized applications to mainstream business use across manufacturing, retail, healthcare, transportation, and smart infrastructure.
This comprehensive guide examines edge AI deployment strategies, hardware platforms, model optimization techniques, implementation best practices, security considerations, and industry applications based on the state of edge AI technology in 2026.
Understanding Edge AI Architecture and Benefits
Edge AI processes data and runs AI inference on devices at or near data sources—sensors, cameras, IoT devices, smartphones, edge servers—rather than transmitting data to centralized cloud servers for processing. This architectural shift provides significant advantages for applications requiring low latency, offline operation, data privacy, or cost reduction.
Core edge AI architectural components:
Edge devices: Hardware running AI inference models ranging from microcontrollers and embedded systems to edge servers and industrial computers. Device capabilities vary dramatically affecting which models can run and at what performance levels.
AI models: Neural networks, machine learning models, and classical algorithms optimized for edge deployment through quantization, pruning, knowledge distillation, or architecture optimization reducing computational requirements while maintaining accuracy.
Inference runtime: Software frameworks that execute AI models on edge devices—TensorFlow Lite, ONNX Runtime, PyTorch Mobile, or platform-specific runtimes such as Apple Core ML and Google ML Kit—providing optimized inference on target hardware.
Edge-cloud hybrid systems: Many deployments use hybrid architectures in which the edge handles real-time inference while the cloud provides model training, updates, monitoring, and complex processing, balancing the strengths of both tiers.
Data preprocessing: Edge deployments often include preprocessing pipelines preparing sensor data for model input—image resizing, normalization, feature extraction—optimized for edge hardware constraints.
Team400 designs edge AI architectures balancing performance, cost, and deployment complexity based on specific application requirements and operational constraints.
Key advantages of edge AI deployment:
Low latency: Edge inference provides millisecond-level response times critical for real-time applications like autonomous vehicles, industrial automation, and robotics, where cloud round-trip latency is unacceptable.
Offline operation: Edge AI functions without internet connectivity, essential for remote locations, temporary connectivity loss, mission-critical applications requiring continuous operation regardless of network status.
Data privacy: Processing data locally prevents transmitting sensitive information to the cloud, addressing privacy regulations, competitive concerns, and customer data protection requirements.
Bandwidth efficiency: Transmitting raw sensor data (especially video) to cloud consumes significant bandwidth. Edge processing reduces bandwidth by sending only results, alerts, or aggregated data.
Cost reduction: While edge hardware has upfront costs, eliminating cloud inference costs and bandwidth charges provides long-term savings, especially for high-volume deployments.
Scalability: Edge deployment scales horizontally by adding devices rather than vertically by increasing cloud capacity, distributing computational load across deployment.
Reliability: Distributed edge architecture avoids single points of failure inherent in centralized cloud systems, improving overall system reliability.
Edge AI Hardware Platforms in 2026
Selecting appropriate edge hardware balances computational capability, power consumption, cost, and form factor based on deployment requirements. Edge AI hardware in 2026 ranges from ultra-low-power microcontrollers to powerful edge servers with dedicated AI accelerators.
Microcontroller-based edge AI:
Ultra-low-power microcontrollers running TinyML models enable AI deployment in battery-powered sensors, wearables, and IoT devices where power consumption is the critical constraint.
Platform examples: ARM Cortex-M series, ESP32, Arduino Nano 33 BLE Sense, STM32 microcontrollers with neural network extensions.
Capabilities: Simple classification, anomaly detection, keyword spotting, gesture recognition on highly constrained models (typically <100KB).
Use cases: Predictive maintenance sensors, smart home devices, wearable health monitors, industrial sensor networks, battery-powered edge devices.
Application-specific integrated circuits (ASICs) for edge AI:
Purpose-built AI chips optimized for inference provide high performance and energy efficiency for specific AI workloads.
Platform examples: Google Edge TPU, Intel Movidius, Hailo AI processors, Qualcomm AI Engine, providing specialized AI acceleration.
Capabilities: Real-time object detection, pose estimation, image classification, speech recognition with power efficiency superior to general-purpose processors.
Use cases: Security cameras, drones, robotics, autonomous vehicles, retail analytics, smart cameras.
GPU-accelerated edge devices:
Edge devices with dedicated GPUs enable deployment of larger, more complex models approaching cloud-level capabilities at the edge.
Platform examples: NVIDIA Jetson series (Nano, Xavier, Orin), providing CUDA-accelerated inference with substantial onboard compute.
Capabilities: Complex computer vision, multi-model pipelines, real-time video analytics, natural language processing, supporting models up to several hundred megabytes.
Use cases: Autonomous robots, industrial quality inspection, medical imaging, autonomous vehicles, video surveillance with analytics.
Edge servers and gateways:
Rack-mounted or ruggedized edge servers aggregate data from multiple sensors, providing substantial computational capacity for factory floors, retail stores, hospitals, and smart buildings.
Platform examples: Dell Edge Gateways, HPE Edgeline, Lenovo ThinkEdge, Cisco Edge Intelligence, providing server-class compute at edge locations.
Capabilities: Support for large models, multiple concurrent inference streams, hybrid edge-cloud workloads, complex data processing pipelines.
Use cases: Factory automation, retail store analytics, hospital patient monitoring, building management systems, smart city infrastructure.
Team400 evaluates edge hardware platforms for specific deployment requirements, selecting optimal hardware balancing performance, power, cost, and operational constraints.
Model Optimization for Edge Deployment
AI models trained on cloud infrastructure often require optimization to run effectively on resource-constrained edge devices. Model optimization techniques reduce model size, computational requirements, and memory footprint while keeping accuracy at acceptable levels.
Quantization techniques:
Quantization reduces model precision from 32-bit floating point to lower precision (16-bit float, 8-bit integer, or even lower) significantly reducing model size and computational requirements.
Post-training quantization: Applies quantization to already-trained models without retraining—the easiest optimization approach, but it may cause larger accuracy degradation than quantization-aware training.
Quantization-aware training: Incorporates quantization effects during training, producing models robust to quantization with minimal accuracy loss.
Dynamic range quantization: Quantizes weights to 8-bit integers while keeping activations as floating point, balancing size reduction and accuracy preservation.
Full integer quantization: Quantizes both weights and activations to 8-bit integers, enabling deployment on integer-only hardware for maximum efficiency.
Typical quantization reduces model size by 2-4x with accuracy degradation under 1% for many models, enabling deployment on hardware unable to run full-precision models.
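The affine scale/zero-point mapping behind int8 quantization can be illustrated in a few lines of plain Python. This is a minimal sketch of the arithmetic that frameworks like TensorFlow Lite apply internally; a real pipeline would operate tensor-by-tensor with framework tooling rather than on Python lists.

```python
# Sketch of affine (asymmetric) int8 quantization: map a float range onto
# [-128, 127] via a scale and zero-point, then recover approximate floats.

def quantize_params(values, num_bits=8):
    """Derive scale and zero-point mapping the observed float range to int8."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(min(values), 0.0), max(max(values), 0.0)  # range must span 0
    scale = (hi - lo) / (qmax - qmin) or 1.0
    zero_point = round(qmin - lo / scale)
    return scale, zero_point

def quantize(values, scale, zero_point):
    return [max(-128, min(127, round(v / scale + zero_point))) for v in values]

def dequantize(quantized, scale, zero_point):
    return [(q - zero_point) * scale for q in quantized]

weights = [0.91, -0.42, 0.13, -1.27, 0.58]
scale, zp = quantize_params(weights)
q = quantize(weights, scale, zp)
restored = dequantize(q, scale, zp)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, round(max_err, 4))
```

The round-trip error is bounded by roughly half the scale, which is why int8 quantization typically costs so little accuracy when the weight range is well behaved.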
Model pruning:
Pruning removes unnecessary model parameters—weights with minimal impact on output—creating sparse models requiring less computation and storage.
Unstructured pruning: Removes individual weights based on magnitude or importance, creating irregular sparsity requiring specialized sparse matrix operations.
Structured pruning: Removes entire channels, filters, or layers creating structured sparsity compatible with standard linear algebra libraries.
Iterative magnitude pruning: Gradually removes low-magnitude weights while retraining, preserving accuracy while increasing sparsity.
Pruning can reduce model size and computation by 50-90% depending on model architecture and target accuracy, particularly effective for vision models.
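One-shot magnitude pruning can be sketched directly: zero out the smallest-magnitude fraction of weights. This toy version works on a flat list of weights; iterative pruning in practice interleaves this step with retraining to recover accuracy.

```python
# Minimal sketch of one-shot magnitude pruning on a flat weight list.

def magnitude_prune(weights, sparsity):
    """Return weights with the lowest-|w| fraction set to zero."""
    k = int(len(weights) * sparsity)          # number of weights to remove
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

weights = [0.8, -0.05, 0.3, 0.01, -0.6, 0.02, -0.9, 0.04]
pruned = magnitude_prune(weights, sparsity=0.5)
achieved = pruned.count(0.0) / len(pruned)
print(pruned, achieved)
```

Structured pruning applies the same idea at the level of whole channels or filters, scoring each unit (for example, by its L1 norm) rather than individual weights.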
Knowledge distillation:
Knowledge distillation trains smaller “student” models to mimic larger “teacher” models, transferring knowledge from powerful but large models to efficient smaller models suitable for edge deployment.
Process: Train large accurate teacher model, use teacher to generate soft labels for training data, train smaller student model matching teacher’s output distribution rather than hard labels.
Benefits: Student models are often more accurate than models of the same size trained directly on labeled data, providing a better accuracy-efficiency tradeoff.
Application: Creating mobile-friendly versions of large cloud models, deploying sophisticated reasoning to edge devices with limited computational capability.
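The core of the distillation loss described above can be sketched in plain Python: soften both models' outputs with a temperature, then penalize divergence from the teacher's distribution. The logit values below are invented for illustration; real training adds this term to the standard cross-entropy on ground-truth labels.

```python
import math

# Sketch of the distillation objective: the student matches the teacher's
# temperature-softened output distribution via a KL divergence.

def softmax(logits, temperature=1.0):
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL(teacher || student) on temperature-softened distributions."""
    p = softmax(teacher_logits, temperature)   # soft labels from the teacher
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [6.0, 2.5, 0.5]      # confident teacher over three classes
aligned = [5.5, 2.0, 0.3]      # student roughly agreeing with the teacher
wrong   = [0.5, 5.5, 2.0]      # student disagreeing with the teacher
print(distillation_loss(aligned, teacher) < distillation_loss(wrong, teacher))
```

A higher temperature exposes the teacher's relative confidence across wrong classes ("dark knowledge"), which is the signal hard labels discard.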
Team400 implements model optimization pipelines ensuring edge AI deployments achieve optimal balance between model accuracy, computational efficiency, and deployment constraints.
Neural architecture search for edge:
Automated neural architecture search (NAS) discovers model architectures optimized specifically for target edge hardware, finding architectures balancing accuracy and efficiency better than manually designed models.
Hardware-aware NAS: Searches for architectures optimized for specific hardware platforms (e.g., finding architectures running efficiently on ARM Cortex-M processors or Edge TPU).
Latency-constrained NAS: Searches under latency budgets ensuring discovered architectures meet real-time requirements on target hardware.
Multi-objective NAS: Optimizes for multiple objectives simultaneously—accuracy, latency, model size, power consumption—finding Pareto-optimal architectures.
Popular edge-optimized architecture families discovered through NAS include MobileNet, EfficientNet, EfficientDet, optimized for mobile and edge deployment.
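The selection step in multi-objective NAS can be sketched as Pareto filtering: keep only candidate architectures that no other candidate beats on every objective at once. The candidate names and numbers below are invented for illustration.

```python
# Toy sketch of Pareto-front selection over (accuracy, latency) candidates,
# the filtering step a multi-objective NAS search applies to its population.

def pareto_front(candidates):
    """Keep candidates no other candidate dominates (more accurate AND faster)."""
    front = []
    for name, acc, lat in candidates:
        dominated = any(
            a >= acc and l <= lat and (a > acc or l < lat)
            for n, a, l in candidates if n != name
        )
        if not dominated:
            front.append(name)
    return front

candidates = [
    ("arch-a", 0.91, 12.0),   # (name, accuracy, latency in ms)
    ("arch-b", 0.89, 7.0),
    ("arch-c", 0.88, 9.0),    # dominated by arch-b: less accurate and slower
    ("arch-d", 0.93, 25.0),
]
print(pareto_front(candidates))
```

Each surviving architecture represents a different accuracy/latency tradeoff; the final pick then depends on the deployment's latency budget.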
Edge AI Deployment Strategies and Frameworks
Deploying AI models to edge devices requires frameworks converting trained models into formats optimized for target hardware, deployment pipelines managing model distribution and updates, and monitoring systems tracking edge deployment performance.
Inference frameworks for edge:
TensorFlow Lite: Google’s framework for mobile and edge deployment supporting quantization, GPU acceleration, specialized hardware delegates, broad hardware support.
ONNX Runtime: Microsoft’s cross-platform inference engine supporting ONNX model format, hardware acceleration, deployment across diverse edge platforms.
PyTorch Mobile: PyTorch’s mobile deployment framework supporting Android and iOS with mobile-optimized model execution.
Platform-specific runtimes: Apple Core ML, Google ML Kit, Qualcomm Neural Processing SDK, providing optimized inference for specific mobile platforms or hardware.
TensorRT: NVIDIA’s high-performance inference optimizer for GPU-accelerated edge devices like Jetson series.
Framework selection depends on target hardware, model architecture, development language preference, and required deployment features.
Model conversion and optimization pipelines:
Converting trained models to edge-compatible formats involves multiple optimization steps ensuring efficient edge execution.
Typical pipeline: Train model in PyTorch/TensorFlow, export to intermediate format (ONNX), apply optimizations (quantization, pruning), convert to target runtime format (TensorFlow Lite, TensorRT), validate on target hardware, deploy.
Automation: Automated pipelines handle model conversion, optimization, validation, ensuring consistent deployment process and catching issues before production deployment.
Hardware validation: It is critical to validate optimized models on actual target hardware, as emulated performance often differs from real device performance.
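The pipeline shape above can be sketched as a simple staged driver with a validation gate. Every stage function and accuracy value below is a hypothetical stub standing in for real tooling (exporters, quantizers, on-device validation); none of these names are real APIs.

```python
# Hypothetical conversion-pipeline driver: run stages in order, then gate
# deployment on an accuracy check against the target hardware.

def run_pipeline(model, stages, validate, min_accuracy=0.90):
    for name, stage in stages:
        model = stage(model)
        print(f"completed stage: {name}")
    accuracy = validate(model)
    if accuracy < min_accuracy:        # refuse to deploy a degraded model
        raise RuntimeError(f"validation failed: accuracy {accuracy:.2f}")
    return model

stages = [
    ("export_onnx",    lambda m: {**m, "format": "onnx"}),
    ("quantize_int8",  lambda m: {**m, "dtype": "int8"}),
    ("convert_tflite", lambda m: {**m, "format": "tflite"}),
]
deployed = run_pipeline({"format": "torch"}, stages,
                        validate=lambda m: 0.94)   # stubbed accuracy check
print(deployed)
```

The important design choice is the gate at the end: optimization stages are easy to automate, but catching an accuracy regression before rollout is what makes the pipeline trustworthy.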
Team400 builds automated edge AI deployment pipelines ensuring reliable, consistent model deployment across diverse edge hardware platforms.
Over-the-air (OTA) model updates:
Edge deployments require mechanisms for updating models deployed across potentially thousands of devices without manual intervention.
Update delivery: Secure channels for distributing new models to edge devices, handling bandwidth constraints, connection reliability, device availability.
Version management: Tracking which model versions are deployed on which devices, supporting rollback to previous versions if updates cause issues.
Staged rollout: Deploying updates to device subsets before full rollout, enabling issue detection before widespread deployment.
A/B testing: Running multiple model versions simultaneously on different device subsets, comparing performance to validate improvements.
Monitoring and rollback: Detecting degraded performance from model updates, automatically rolling back problematic updates.
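Staged rollout can be implemented with deterministic hashing: hash each device ID into a bucket and admit devices whose bucket falls under the current rollout percentage. This is a minimal sketch; device IDs and the version string are illustrative, and production systems layer this on a device-management platform.

```python
import hashlib

# Deterministic staged rollout: hash device ID + model version into [0, 100)
# and admit devices below the rollout percentage. Hashing keeps cohort
# membership stable as the percentage grows, so the 5% canary cohort is
# always a subset of the later 50% cohort.

def in_rollout(device_id: str, model_version: str, percent: float) -> bool:
    digest = hashlib.sha256(f"{model_version}:{device_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100     # stable bucket per device+version
    return bucket < percent

devices = [f"device-{i:04d}" for i in range(1000)]
canary = [d for d in devices if in_rollout(d, "v2.3.0", percent=5)]
full   = [d for d in devices if in_rollout(d, "v2.3.0", percent=50)]
print(len(canary), len(full))
```

Salting the hash with the model version reshuffles cohorts between releases, so the same devices are not always first to absorb update risk.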
Edge AI Security Considerations
Edge AI deployments face unique security challenges: models run on devices with varying security postures, often process sensitive data, and may be deployed in physically accessible or hostile environments.
Model security and intellectual property protection:
AI models represent valuable intellectual property requiring protection from theft or reverse engineering when deployed to edge devices.
Model encryption: Encrypting models in storage and during transmission prevents unauthorized access to model architectures and parameters.
Secure enclaves: Hardware-backed secure execution environments (ARM TrustZone, Intel SGX) provide protected model execution preventing extraction from running devices.
Model obfuscation: Techniques making models harder to reverse engineer even if accessed, though this provides limited protection against sophisticated attacks.
Licensing and DRM: Mechanisms ensuring models only run on authorized devices, preventing unauthorized redeployment of proprietary models.
Adversarial attack mitigation:
Edge AI models face potential adversarial attacks where malicious inputs cause incorrect predictions, particularly concerning for security-critical applications.
Input validation: Validating inputs for anomalies or known adversarial patterns before feeding to models, detecting obvious manipulation attempts.
Adversarial training: Training models on adversarial examples improving robustness to adversarial perturbations.
Ensemble methods: Using multiple models with diverse architectures making it harder for attackers to craft inputs fooling all models simultaneously.
Monitoring for attacks: Detecting unusual prediction patterns or input distributions suggesting adversarial attacks.
Data privacy and regulatory compliance:
Edge processing addresses privacy by keeping data local, but proper implementation is critical for privacy guarantees and regulatory compliance.
Data minimization: Processing only necessary data at edge, deleting raw data after inference when possible, transmitting only inference results rather than raw inputs.
Federated learning: Training model improvements using edge device data without transmitting raw data to cloud, enabling learning from distributed private data.
Compliance frameworks: Ensuring edge AI deployments comply with GDPR, CCPA, HIPAA, and other privacy regulations requiring appropriate data handling.
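The federated learning approach above centers on one aggregation step, often called federated averaging (FedAvg): each device trains locally and uploads only weight updates, which the server averages weighted by local example counts. This is a toy sketch with plain lists standing in for real weight tensors.

```python
# Toy sketch of federated averaging: combine per-device weight updates,
# weighted by how many local examples each device trained on. Raw data
# never leaves the devices; only these weight vectors are transmitted.

def federated_average(client_updates):
    """client_updates: list of (num_examples, weights) from edge devices."""
    total = sum(n for n, _ in client_updates)
    dims = len(client_updates[0][1])
    return [
        sum(n * w[i] for n, w in client_updates) / total
        for i in range(dims)
    ]

updates = [
    (100, [0.20, -0.10]),   # device that trained on 100 local examples
    (300, [0.40, -0.30]),   # device that trained on 300 local examples
]
global_weights = federated_average(updates)
print(global_weights)
```

Weighting by example count keeps a device with little data from pulling the global model as hard as a device with a large local dataset.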
Team400 implements comprehensive security strategies for edge AI deployments ensuring model protection, adversarial resilience, and regulatory compliance.
Industry-Specific Edge AI Applications
Different industries deploy edge AI for applications leveraging edge computing benefits—low latency, offline operation, privacy—with industry-specific requirements and use cases.
Manufacturing and industrial edge AI:
Manufacturing deploys edge AI for quality control, predictive maintenance, process optimization, safety monitoring, requiring real-time inference and offline operation in harsh industrial environments.
Quality inspection: Visual inspection systems detecting defects in manufactured products, running complex computer vision models at production line speeds.
Predictive maintenance: Analyzing vibration, temperature, acoustic data from equipment predicting failures before they occur, enabling proactive maintenance scheduling.
Process optimization: Real-time monitoring and optimization of manufacturing processes using AI models responding to changing conditions instantly.
Safety monitoring: Detecting safety hazards, monitoring PPE compliance, identifying dangerous situations in real-time enabling immediate intervention.
Retail edge AI applications:
Retail uses edge AI for customer analytics, inventory management, loss prevention, personalized experiences, processing video and sensor data locally for privacy and real-time response.
Customer analytics: Analyzing customer movement patterns, dwell times, demographics, engagement with products using computer vision while preserving privacy through local processing.
Automated checkout: Cashierless stores using edge AI for product recognition, shopping cart tracking, automated payment processing.
Inventory management: Visual detection of out-of-stock items, shelf organization analysis, automated stock level monitoring.
Loss prevention: Real-time detection of suspicious behaviors, unusual patterns, potential theft enabling immediate security response.
Healthcare and medical edge AI:
Healthcare deploys edge AI in medical devices, diagnostic equipment, patient monitoring systems where low latency, offline operation, and data privacy are critical.
Medical imaging: Edge AI in imaging equipment (ultrasound, X-ray, endoscopy) providing real-time analysis, highlighting areas of concern, assisting diagnosis at point of care.
Patient monitoring: Continuous monitoring of vital signs with AI detecting anomalies, predicting deterioration, alerting medical staff to concerning patterns.
Wearable health devices: Smart watches, fitness trackers, continuous glucose monitors using edge AI for real-time health insights, fall detection, arrhythmia detection.
Remote diagnostics: Portable diagnostic devices with embedded AI enabling sophisticated diagnostics in remote locations without internet connectivity.
Team400 develops industry-specific edge AI solutions addressing unique requirements, constraints, and regulatory considerations for manufacturing, retail, healthcare, and other sectors.
Autonomous vehicles and transportation:
Autonomous vehicles require sophisticated edge AI processing sensor data in real time for perception, decision-making, and control, with safety-critical latency requirements.
Perception: Processing lidar, radar, and camera data to detect vehicles, pedestrians, road features, and traffic signs, with millisecond-level response times that cloud round-trips cannot deliver.
Path planning: Real-time trajectory planning and obstacle avoidance using AI models running on vehicle edge compute.
Fleet optimization: Commercial fleets using edge AI for route optimization, driver behavior monitoring, predictive maintenance, fuel efficiency optimization.
Smart infrastructure: Traffic management systems using edge AI at intersections, roadways for traffic flow optimization, incident detection, adaptive traffic control.
Edge AI Development and Testing Best Practices
Successful edge AI deployment requires specialized development approaches addressing edge-specific constraints and challenges.
Development workflows for edge AI:
Cloud development, edge deployment: Train models using cloud resources, optimize for edge, deploy to edge devices—most common workflow balancing cloud training capabilities with edge deployment targets.
Hardware-in-the-loop testing: Continuous testing on actual target hardware during development catching edge-specific issues early rather than discovering problems during final deployment.
Edge simulation: Using device simulators and emulators for rapid development iteration before hardware availability, though validation on actual hardware remains essential.
Continuous integration for edge: Automated pipelines testing model updates on target hardware configurations before deployment, ensuring updates maintain performance and accuracy.
Performance benchmarking:
Systematic benchmarking ensures edge AI models meet performance requirements on target hardware before deployment.
Latency measurement: Measuring end-to-end inference latency including preprocessing, model execution, postprocessing on target hardware under realistic conditions.
Throughput testing: Measuring sustained inference rate for applications processing continuous streams, identifying bottlenecks limiting throughput.
Power consumption: Measuring power consumption during inference critical for battery-powered edge devices where excessive power draw reduces operational life.
Memory footprint: Measuring runtime memory requirements ensuring models fit within device memory constraints with margin for other processes.
Accuracy validation: Confirming optimized models maintain required accuracy on representative test data after quantization, pruning, or other optimizations.
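The measurement steps above can be sketched as a small benchmark harness: warm up, time each inference, and report percentiles rather than a single average. `fake_inference` below is a stand-in workload; on real hardware you would replace it with the actual runtime invocation.

```python
import statistics
import time

# Minimal latency-benchmark harness: warmup runs, per-call timing, and
# percentile reporting (p95 matters more than the mean for real-time SLAs).

def benchmark(infer, warmup=10, iterations=100):
    for _ in range(warmup):                 # warm caches / lazy initialization
        infer()
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        infer()
        samples.append((time.perf_counter() - start) * 1000.0)  # milliseconds
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * len(samples)) - 1],
        "mean_ms": statistics.fmean(samples),
    }

def fake_inference():
    sum(i * i for i in range(20_000))       # stand-in for a model invocation

stats = benchmark(fake_inference)
print({k: round(v, 3) for k, v in stats.items()})
```

Reporting p95 alongside p50 exposes tail latency, which is usually what violates a real-time budget even when average latency looks fine.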
Team400 establishes edge AI development workflows and testing protocols ensuring reliable edge deployments meeting performance, accuracy, and resource requirements.
Monitoring and maintenance:
Edge AI deployments require ongoing monitoring and maintenance ensuring continued operation and performance.
Performance monitoring: Tracking inference latency, throughput, resource utilization, detecting performance degradation requiring attention.
Accuracy monitoring: Monitoring prediction distributions, accuracy metrics where ground truth is available, detecting model drift suggesting need for updates.
Device health monitoring: Tracking device status, connectivity, resource availability, enabling proactive maintenance before failures.
Model updating: Systematic processes for deploying model improvements, bug fixes, handling updates across distributed edge deployments.
Cost Considerations for Edge AI Deployment
Edge AI deployment costs include hardware, development, deployment infrastructure, and ongoing maintenance, with different cost structures than cloud AI.
Hardware costs:
Edge hardware costs vary dramatically by capability—from $5 microcontrollers to $1000+ edge servers—making appropriate hardware selection critical to cost-effective deployment.
Per-device costs: Direct hardware costs multiplied by deployment scale, requiring careful hardware selection balancing capability and cost.
Deployment costs: Installation, configuration, and physical mounting add to per-device costs, especially for physically challenging deployments.
Replacement and maintenance: Expected hardware lifespan, replacement cycles, maintenance costs affecting total cost of ownership.
Development and deployment costs:
Model development: Training models, optimization for edge deployment, testing and validation requiring data science and ML engineering resources.
Integration: Integrating edge AI with existing systems, developing preprocessing pipelines, building monitoring infrastructure.
Deployment infrastructure: OTA update systems, device management platforms, monitoring infrastructure for managing distributed edge deployments.
Ongoing costs:
Model updates: Continued model improvement, retraining with new data, deploying updates to edge devices.
Infrastructure: Cloud infrastructure for model training, device management, data aggregation from edge devices.
Support and maintenance: Managing edge deployments, addressing device failures, updating software, handling scaling.
ROI calculation:
Edge AI ROI depends on value created versus costs, considering both direct financial returns and strategic benefits.
Value quantification: Operational efficiency improvements, quality improvements, new capabilities enabling new revenue, cost savings from reduced cloud usage.
Cost comparison: Edge deployment costs versus cloud inference costs considering inference volume, bandwidth savings, latency improvements.
Scaling economics: How costs and benefits scale with deployment size, often favoring edge as deployment scales due to reduced per-inference costs.
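A back-of-the-envelope version of this cost comparison can be sketched in a few lines. Every dollar figure below is an illustrative assumption, not a vendor quote; the point is the shape of the calculation, not the numbers.

```python
# Illustrative edge-vs-cloud monthly cost comparison. All prices are
# assumptions for the sketch: $0.50 per 1k cloud inferences, $200/mo
# bandwidth, $250 devices amortized over 36 months, $2/device maintenance.

def monthly_cost_cloud(inferences, price_per_1k=0.50, bandwidth=200.0):
    return inferences / 1000 * price_per_1k + bandwidth

def monthly_cost_edge(devices, device_price=250.0, lifetime_months=36,
                      maintenance_per_device=2.0):
    amortized = devices * device_price / lifetime_months
    return amortized + devices * maintenance_per_device

inferences_per_month = 5_000_000
devices = 40
cloud = monthly_cost_cloud(inferences_per_month)
edge = monthly_cost_edge(devices)
print(f"cloud ${cloud:,.0f}/mo vs edge ${edge:,.0f}/mo")
```

Because cloud cost scales with inference volume while edge cost scales with device count, the comparison flips in edge's favor as per-device inference volume grows.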
Frequently Asked Questions About Edge AI Deployment
What types of AI models can run on edge devices?
Most AI models can be optimized for edge deployment through quantization, pruning, and architecture optimization. Computer vision models (object detection, image classification, segmentation), natural language processing (keyword spotting, simple classification), time-series models (anomaly detection, predictive maintenance), and smaller transformer models all run effectively on modern edge hardware. Model complexity determines minimum hardware requirements. Team400 optimizes models for specific edge platforms ensuring efficient deployment.
How do I choose between edge AI and cloud AI for my application?
Choose edge AI when you need: low latency (sub-100ms), offline operation, data privacy, high inference volumes making cloud costs prohibitive, or distributed deployments. Choose cloud AI when you need: complex models exceeding edge capability, easy model updates across deployments, centralized data processing, or when edge hardware constraints are prohibitive. Many applications use hybrid approaches. Team400 evaluates architectures based on requirements.
What’s the typical accuracy impact of optimizing models for edge deployment?
Quantization to 8-bit integers typically causes <1% accuracy degradation for well-designed models. Pruning can reduce model size 50-90% with 1-3% accuracy impact. Knowledge distillation produces compact models often within 2-5% of teacher accuracy. Actual impact varies by model architecture, dataset, and optimization techniques. Careful optimization minimizes accuracy impact while maximizing efficiency. Team400 implements optimization pipelines preserving model accuracy.
How do I update AI models deployed on thousands of edge devices?
Implement over-the-air (OTA) update infrastructure supporting: secure model delivery, bandwidth-efficient distribution, version tracking, staged rollout, rollback capabilities, and offline update capability. Use device management platforms (AWS IoT, Azure IoT, Google Cloud IoT) or custom solutions for large deployments. Test updates thoroughly before broad deployment. Team400 builds robust OTA update systems for edge AI.
What security risks does edge AI face and how do I mitigate them?
Edge AI faces: model theft/IP protection, adversarial attacks, data privacy risks, unauthorized access, compromised devices. Mitigate through: model encryption, secure enclaves, adversarial training, input validation, authentication, access control, regular security updates, monitoring for attacks. Security requirements vary by application criticality and deployment environment. Team400 implements comprehensive edge AI security strategies.
What edge AI hardware should I select for my application?
Hardware selection depends on: model complexity, required inference speed, power constraints, cost per device, deployment scale, environmental conditions. Microcontrollers for simple models in power-constrained environments, ASICs for specific AI workloads, GPUs for complex models, edge servers for multiple models or high throughput. Prototype on candidate hardware before large deployments. Team400 provides hardware selection guidance for edge AI deployments.
How much does edge AI deployment cost compared to cloud AI?
Edge AI has higher upfront costs (hardware, deployment) but lower ongoing costs (no per-inference cloud charges). Break-even depends on inference volume, model complexity, required latency. High-volume deployments (>1M inferences monthly) often favor edge economically. Consider both direct costs and value of latency reduction, offline capability, privacy. Team400 provides detailed cost analysis for edge versus cloud deployment.
Can edge AI models learn and improve from data collected at the edge?
Yes, through federated learning where models train on decentralized edge device data without centralizing raw data. This enables privacy-preserving learning from distributed data. Online learning allows models to adapt to local conditions. However, most edge deployments use cloud training with edge inference, periodically updating edge models with cloud-trained improvements. Team400 implements federated learning for edge AI.
What monitoring do I need for production edge AI deployments?
Monitor: inference latency, throughput, accuracy (when ground truth available), resource utilization, device health, connectivity, model version distribution, error rates, data distribution for drift detection. Centralize monitoring data for analysis while respecting privacy constraints. Alert on anomalies requiring intervention. Regular monitoring prevents issues from becoming failures. Team400 designs comprehensive monitoring for edge AI.
How do I handle edge AI deployment at large scale?
Large-scale edge AI requires: automated deployment pipelines, robust OTA update systems, centralized monitoring, device management platforms, efficient debugging and troubleshooting workflows, support infrastructure, clear operational procedures. Start with smaller deployments, validate processes, then scale. Plan for heterogeneous hardware, varying connectivity, device failures. Team400 designs scalable edge AI architectures supporting thousands to millions of devices.
Conclusion: Strategic Edge AI Deployment
Edge AI enables AI applications impossible or impractical with cloud-only approaches—real-time autonomous systems, privacy-preserving analytics, offline AI operation, cost-effective high-volume inference. As edge hardware capabilities increase and model optimization techniques improve, edge AI is expanding from specialized applications to mainstream business deployments.
Successful edge AI deployment requires careful planning balancing model complexity, hardware capabilities, cost constraints, and operational requirements. Organizations must select appropriate hardware, optimize models for edge constraints, implement robust deployment infrastructure, ensure security, and establish monitoring and maintenance processes.
Team400 provides end-to-end edge AI deployment services—from architecture design through model optimization, hardware selection, deployment implementation, and ongoing support. Our expertise in both AI model development and edge computing ensures edge AI deployments deliver business value while meeting technical, operational, and cost requirements.
Whether deploying edge AI for manufacturing quality control, retail analytics, healthcare diagnostics, or autonomous systems, strategic planning and expert implementation enable organizations to realize the full benefits of bringing AI capabilities to the edge.
This comprehensive guide reflects edge AI state in 2026 based on current hardware capabilities, optimization techniques, and deployment best practices. Team400 maintains expertise in latest edge AI technologies and deployment methodologies.